Hi Team, greetings! This is Srinivasa. Could you please provide documentation on how to install and configure Splunk with Cisco Unified Communications Manager (CUCM) on-prem?
We followed this documentation: https://docs.splunk.com/Documentation/ES/8.0.40/Install/UpgradetoNewVersion It mentions that you need to update the "Splunk_TA_ForIndexer" app. During our upgrade, the required indexes were deployed on a single search head in the cluster and we had to move them to our indexer cluster. We did this using our internal procedures. As far as I know, there is no clear documentation on exactly what to do if you hit this issue too.
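For reference, one common way to move index definitions to an indexer cluster (a sketch based on general clustering practice, not the ES upgrade docs; the app name, index name, and paths are illustrative) is to place the indexes.conf in an app on the cluster manager and push the bundle:

# On the cluster manager, e.g.:
# $SPLUNK_HOME/etc/manager-apps/es_indexes/local/indexes.conf
[notable]
homePath   = $SPLUNK_DB/notable/db
coldPath   = $SPLUNK_DB/notable/colddb
thawedPath = $SPLUNK_DB/notable/thaweddb

# Then push the configuration bundle to the peers:
# $SPLUNK_HOME/bin/splunk apply cluster-bundle

After the bundle is applied, the index definitions can be removed from the search head so they live only on the cluster.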
Hi @aravind There isn't a suppression list which customers can access; however, if you log a support ticket, Support can check the PostMark mail server logs to see whether any emails bounced. This could help confirm: a) whether the alert actually fired correctly, b) whether the email was accepted by the mail relay, and c) whether the relay had any issue sending on to the final destination. At a previous customer we had a number of issues with the customer's email server detecting some of the Splunk Cloud alerts as spam and silently bouncing them. You can contact Support via https://www.splunk.com/support Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi, We are experiencing a critical issue where several scheduled alerts/reports are not being received by the intended recipients. This issue affects both individual mailboxes and distribution lists. Initially, only a few users reported missing alerts. However, it has now escalated, with all members of the distribution lists no longer receiving several key reports. Only a few support team members continue to receive alerts in their personal mailboxes, suggesting inconsistent delivery. Also, just checking: is there any suppression list blocking delivery?
Hi @Ramachandran To force the omhttp module to use HTTP instead of HTTPS, set the usehttps parameter to "off":

action(type="omhttp"
    server="172.31.25.126"
    serverport="8088"
    usehttps="off"
    uri="/services/collector/event"
    headers=["Authorization: Splunk <token>"]
    template="RSYSLOG_SyslogProtocol23Format"
    queue.filename="fwdRule1"
    queue.maxdiskspace="1g"
    queue.saveonshutdown="on"
    queue.type="LinkedList"
    action.resumeRetryCount="-1"
)

The usehttps parameter controls whether the module connects over HTTPS or HTTP. By default it is set to "on", meaning HTTPS is used; setting it to "off" forces plain HTTP. Additionally, use serverport instead of port to specify the port number. The behavior you're seeing is expected if you only set the port to 8088 without configuring the protocol, because the default protocol is HTTPS. https://www.rsyslog.com/doc/v8-stable/configuration/modules/omhttp.html
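Before wiring up rsyslog, it can help to verify the HEC endpoint independently. A quick smoke test (a sketch, assuming HEC is listening on plain HTTP on 8088; replace <token> with your real HEC token):

curl -s "http://172.31.25.126:8088/services/collector/event" \
     -H "Authorization: Splunk <token>" \
     -d '{"event": "hello from curl", "sourcetype": "manual_test"}'

A successful request returns a JSON body with "text":"Success" and "code":0; a connection failure here means the problem is network/HEC configuration rather than the rsyslog action.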
Hi @amit2312 If you want to extract this as part of a search then you can do the following:

| rex "Service ID - (?<Service_ID>\S+)$"

To convert your rex to an automatic extraction, add the regex as a REPORT extraction (or an inline EXTRACT) in your props.conf:

== props.conf ==
[yourSourcetype]
REPORT-service_id = service_id_extraction

== transforms.conf ==
[service_id_extraction]
REGEX = Service ID - (?<Service_ID>\S+)$
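To see why this pattern works on the sample events, here is a quick standalone check in Python (the events are abbreviated versions of the ones in the question; the regex is the same as the rex above):

```python
import re

# Abbreviated sample events; the Service ID is the final
# whitespace-delimited token on each line.
events = [
    "Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf",
    "Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f",
    "Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455",
]

# Same pattern as the rex command: capture non-whitespace at end of line.
pattern = re.compile(r"Service ID - (?P<Service_ID>\S+)$")

ids = [m.group("Service_ID") for e in events if (m := pattern.search(e))]
print(ids)
# → ['zywstrf', 'abc123f', '1234-abcehu09_svc06-app_texsas_14455']
```

`\S+` anchored with `$` handles all three cases (short alphanumerics as well as long IDs containing hyphens and underscores) because it simply takes everything after the last "Service ID - " up to the end of the line.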
Hi All, I am very new to Splunk and faced an issue while extracting a value which is alphanumeric with no predefined length. For example:

2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455

I am trying to get the Service ID value, which comes at the end of the line. Thanks a lot in advance. Regards, AKM
I'm forwarding logs from an EC2 instance using rsyslog with the omhttp module to a Splunk HEC endpoint running on another EC2 instance (IP: 172.31.25.126) over port 8088. My rsyslog.conf includes:

module(load="omhttp")
action(type="omhttp"
    server="172.31.25.126"
    port="8088"
    uri="/services/collector/event"
    headers=["Authorization: Splunk <token>"]
    template="RSYSLOG_SyslogProtocol23Format"
    queue.filename="fwdRule1"
    queue.maxdiskspace="1g"
    queue.saveonshutdown="on"
    queue.type="LinkedList"
    action.resumeRetryCount="-1"
)

Problem: Even though I've explicitly configured port 8088, I get this error:

omhttp: suspending ourselves due to server failure 7: Failed to connect to 172.31.25.126 port 443: No route to host

It seems like omhttp is still trying to use HTTPS (port 443) instead of plain HTTP on port 8088.

Questions:
1. How do I force the omhttp module to use HTTP instead of HTTPS?
2. Is there a configuration parameter to explicitly set the protocol scheme (http vs https)?
3. Is this behavior expected if I just set the port to 8088 without configuring the protocol?

Any insights or examples are appreciated. Thanks!
Hi, If you set otel.exporter.otlp.endpoint then you shouldn't have to set anything for the logs endpoint or the profiler logs endpoint, because by default they append /v1/logs to your OTLP endpoint. It looks like you set your profiler logs endpoint but didn't include /v1/logs, which I think is what's causing your exporting error.
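As a sketch, the JVM system properties might look like this (the collector URL is illustrative, and splunk.profiler.logs-endpoint is the property name I'm assuming for the Splunk Java agent's profiler setting):

-Dotel.exporter.otlp.endpoint=http://my-collector:4318
# Usually unnecessary -- derived from the endpoint above with /v1/logs appended:
# -Dotel.exporter.otlp.logs.endpoint=http://my-collector:4318/v1/logs
# If you do set the profiler endpoint explicitly, include the /v1/logs path:
-Dsplunk.profiler.logs-endpoint=http://my-collector:4318/v1/logs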
I don't believe this is correct. Splunk uses the splunk.secret file for encrypting and decrypting passwords and other sensitive info in its configuration files, but it uses different schemes for different purposes:

$6$ (SHA-512): used for hashing passwords.
$7$ (encryption): requires the splunk.secret file for decryption.

The hashing scheme is what makes this portable and useful with automation. You can generate a password hash using:

splunk hash-passwd <somePassword>

Then you can run something like this before you start Splunk (note the quoted 'EOF' so the shell doesn't expand the $ characters in the hash):

cat <<'EOF' > $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
HASHED_PASSWORD = $6$TOs.jXjSRTCsfPsw$2St.t9lH9fpXd9mCEmCizWbb67gMFfBIJU37QF8wsHKSGud1QNMCuUdWkD8IFSgCZr5.W6zkjmNACGhGafQZj1
EOF

Alternatively, you can create and export a user-seed.conf file with the same information, put it in Ansible Vault, and have it placed in $SPLUNK_HOME/etc/system/local as part of the automation. None of the hosts that user-seed.conf is distributed to need to have the same splunk.secret, since verification is hash-matching, not decryption.
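To illustrate why no shared splunk.secret is needed: verifying a salted hash only requires re-hashing the candidate password with the salt stored inside the hash itself. A simplified Python sketch of the idea (this is NOT Splunk's exact $6$ SHA-512-crypt algorithm, just an illustration of hash-matching; the salt value is borrowed from the example above):

```python
import hashlib

def make_hash(password: str, salt: str) -> str:
    # Simplified salted SHA-512 in $6$-style format: $6$<salt>$<digest>
    digest = hashlib.sha512((salt + password).encode()).hexdigest()
    return f"$6${salt}${digest}"

def verify(password: str, stored: str) -> bool:
    # Re-hash with the salt recovered from the stored string and compare.
    # No secret key is involved anywhere -- this is matching, not decryption.
    _, _, salt, _ = stored.split("$")
    return make_hash(password, salt) == stored

stored = make_hash("changeme", "TOs.jXjSRTCsfPsw")
print(verify("changeme", stored))  # True
print(verify("wrong", stored))     # False
```

The real $6$ scheme (SHA-512-crypt) applies many rounds and a different encoding, but the verification model is the same: everything needed to check a password travels inside the hash string.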
Hi @lakshman239 I would ask the firewall team to check/show that the traffic is not being blocked. I spent a lot of time with a customer recently who told me they had a direct connection to Splunk Cloud, when in fact it went via some Palo Alto firewalls which were occasionally blocking traffic; when it blocked, it gave that exact error. If the usual netcat/openssl tests show your connectivity is okay, it's hard to pinpoint. Check out https://community.splunk.com/t5/Deployment-Architecture/Connection-problems-with-Universal-Forwarder-for-Linux-ARM-and/m-p/232759 which has some more detail too.
That is what I wanted to confirm. Do you have any suggestions for other ways to send logs from a UF to Logstash? I have tested TCP, which works, but somehow it also sends the Splunk UF's internal logs to Logstash, which I then need to filter at the Logstash level.
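One option worth trying is filtering the internal indexes on the UF side rather than in Logstash, using the forwardedindex settings in outputs.conf (a sketch; the group name and Logstash host/port are illustrative, and these filters live in the global [tcpout] stanza):

# outputs.conf on the UF
[tcpout]
defaultGroup = logstash_group
# Forward everything except the internal _* indexes (_internal, _audit, ...):
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.filter.disable = false

[tcpout:logstash_group]
server = logstash-host:5514
sendCookedData = false

With this in place the UF should stop shipping its own _internal events, so no filter is needed on the Logstash side.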
There's something odd in the interaction between the <event> display, the table command, and the fields control. I have an example dashboard which runs this search:

index=_internal user=*
| table _time index sourcetype user *
| eval Channel=user

Yet the Channel column is not shown, even though it is in the <fields> statement. If I change table to a fields statement, or remove it completely, it works. Is there any reason you are adding the table command there? It doesn't really serve any purpose, as you are controlling the display with the <fields> statement.
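For reference, a minimal Simple XML sketch of the working variant (panel structure and time range are illustrative; note fields in place of table in the SPL):

<dashboard>
  <row>
    <panel>
      <event>
        <search>
          <query>index=_internal user=* | fields _time index sourcetype user | eval Channel=user</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <fields>_time, index, sourcetype, user, Channel</fields>
      </event>
    </panel>
  </row>
</dashboard>

Since <fields> already controls which columns the event display shows, letting it do that job and keeping the SPL free of table avoids the conflict.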