All Posts

Hi @sreddem , I suppose that you know that Cisco CDR Reporting and Analytics is a commercial app; in other words, you have to pay for it! Anyway, on the Splunkbase site (https://splunkbase.splunk.com/app/669) you can find all the instructions to install and configure it. In addition, you can find additional information at https://community.cisco.com/t5/unified-communications-infrastructure/sending-cucm-system-logs-to-syslog-splunk/td-p/4162264 Ciao. Giuseppe
This same information has been stated in some other places in the MS documentation too. Basically, (almost) all logs have some delay when you try to get them via Azure's own functionality. But if you install a UF, you get them immediately.
I know this thread is old, but this information may still help. As specified in Microsoft Learn portal, "Microsoft doesn't guarantee a specific time after an event occurs for the corresponding audit record to be returned in the results of an audit log search. For core services (such as Exchange, SharePoint, OneDrive, and Teams), audit record availability is typically 60 to 90 minutes after an event occurs. For other services, audit record availability might be longer. However, some issues that are unavoidable (such as a server outage) might occur outside of the audit service that delays the availability of audit records. For this reason, Microsoft doesn't commit to a specific time."
Hello @Nawab , Did you find an answer?
Hi Team, Greetings!! This is Srinivasa. Could you please provide documentation on how to install and configure Splunk with Cisco Unified Applications (CUCM) on-prem?
Can you share the support mail address or any contacts? I have tried to raise a ticket with support, but it failed.
Did you find a solution @rallapallisagar ?
Has anyone got a solution? 
We followed this documentation: https://docs.splunk.com/Documentation/ES/8.0.40/Install/UpgradetoNewVersion It mentions that you need to update the "Splunk_TA_ForIndexer" app. During our upgrade, the required indexes were deployed on one single search head in the cluster and we had to "move them" to our indexer cluster. We did that via our internal procedures; I am not aware of any clear documentation describing exactly what you have to do if you run into this issue too.
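For what it's worth, a rough sketch of the general approach we used for an indexer cluster (the index name and paths are placeholders, not taken from the ES docs; follow the upgrade documentation for your version):

# on the cluster manager, place the generated Splunk_TA_ForIndexer under the peer-apps directory
#   $SPLUNK_HOME/etc/manager-apps/Splunk_TA_ForIndexer/   (master-apps on older versions)
# its indexes.conf carries the ES index definitions, e.g.:
[example_es_index]
homePath   = $SPLUNK_DB/example_es_index/db
coldPath   = $SPLUNK_DB/example_es_index/colddb
thawedPath = $SPLUNK_DB/example_es_index/thaweddb

# then push the configuration bundle to the peers
$SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes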
Hi @aravind  There isn't a suppression list which customers can access; however, if you log a support ticket they are able to check the PostMark mail server logs to see whether any emails bounced. This can help confirm: a) whether the alert actually fired correctly, b) whether the email was accepted by the mail relay, and c) whether the relay had any issue sending on to the final destination. At a previous customer we had a number of issues with the customer's email server detecting some of the Splunk Cloud alerts as spam and silently bouncing them. You can contact Support via https://www.splunk.com/support
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
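For checking point a) and the email action from the Splunk side, here is a minimal sketch, assuming index=_internal is searchable in your stack; the alert name is a placeholder:

(did the scheduled search run and trigger an email action?)
index=_internal sourcetype=scheduler savedsearch_name="Your Alert Name"
| table _time savedsearch_name status result_count alert_actions

(any errors from the sendemail alert action?)
index=_internal source=*python.log* sendemail ERROR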
Hi, We are experiencing a critical issue where several scheduled alerts/reports are not being received by the intended recipients. This issue affects both individual mailboxes and distribution lists. Initially, only a few users reported missing alerts. However, it has now escalated, with all members of the distribution lists no longer receiving several key reports. Only a few support team members continue to receive alerts in their personal mailboxes, suggesting inconsistent delivery. Also, just checking: is there any suppression list blocking delivery?
Hi @livehybrid  Thanks a lot for your quick response, the solution worked nicely.   Regards, AKM
Thanks for suggesting this, bro. Let me try it and let you know the result.
Hi @Ramachandran  To force the omhttp module to use HTTP instead of HTTPS, you need to specify the usehttps parameter and set it to off.

action(type="omhttp"
  server="172.31.25.126"
  serverport="8088"
  usehttps="off"
  uri="/services/collector/event"
  headers=["Authorization: Splunk <token>"]
  template="RSYSLOG_SyslogProtocol23Format"
  queue.filename="fwdRule1"
  queue.maxdiskspace="1g"
  queue.saveonshutdown="on"
  queue.type="LinkedList"
  action.resumeRetryCount="-1"
)

The usehttps parameter controls whether the module uses HTTPS or HTTP to connect to the server. By default, it is set to on, which means HTTPS is used. Setting it to off will force the module to use HTTP. Additionally, you should use serverport instead of port to specify the port number. The behavior you're seeing is expected if you only set the port to 8088 without configuring the protocol, because the default protocol is HTTPS. https://www.rsyslog.com/doc/v8-stable/configuration/modules/omhttp.html
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
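As a quick sanity check independent of rsyslog, you can confirm that HEC itself accepts plain HTTP on 8088 with something like the following (the token is a placeholder):

curl "http://172.31.25.126:8088/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "hec http test"}'

A {"text":"Success","code":0} response means HEC is reachable over HTTP. If HEC has "Enable SSL" turned on in Splunk, plain HTTP on 8088 will be refused and you would need to keep usehttps="on" (the default) instead.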
Hi @amit2312  If you want to extract this as part of a search then you can do the following:

| rex "Service ID - (?<Service_ID>\S+)$"

To convert your rex to an automatic extraction, add the regex as a REPORT extraction (or an inline EXTRACT extraction) to your props.conf:

== props.conf ==
[yourSourcetype]
REPORT-service_id = service_id_extraction

== transforms.conf ==
[service_id_extraction]
REGEX = Service ID - (?<Service_ID>\S+)$

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
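As a usage example (index and sourcetype names are placeholders), once the props/transforms are deployed to the search head, the field should appear at search time without an explicit rex:

index=your_index sourcetype=yourSourcetype "Service ID"
| table _time Service_ID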
Hi All, I am very new to Splunk and faced an issue while extracting a value which is alphanumeric, with no predefined length. For example:

2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455

I am trying to get the Service ID value, which comes at the end of the line. Thanks a lot in advance. Regards, AKM
I'm forwarding logs from an EC2 instance using rsyslog with the omhttp module to a Splunk HEC endpoint running on another EC2 instance (IP: 172.31.25.126) over port 8088. My rsyslog.conf includes:

module(load="omhttp")
action(type="omhttp"
  server="172.31.25.126"
  port="8088"
  uri="/services/collector/event"
  headers=["Authorization: Splunk <token>"]
  template="RSYSLOG_SyslogProtocol23Format"
  queue.filename="fwdRule1"
  queue.maxdiskspace="1g"
  queue.saveonshutdown="on"
  queue.type="LinkedList"
  action.resumeRetryCount="-1"
)

Problem: Even though I've explicitly configured port 8088, I get this error:

omhttp: suspending ourselves due to server failure 7: Failed to connect to 172.31.25.126 port 443: No route to host

It seems like omhttp is still trying to use HTTPS (port 443) instead of plain HTTP on port 8088.

Questions:
1. How do I force the omhttp module to use HTTP instead of HTTPS?
2. Is there a configuration parameter to explicitly set the protocol scheme (http vs https)?
3. Is this behavior expected if I just set the port to 8088 without configuring the protocol?

Any insights or examples are appreciated. Thanks!
Hi, Thanks a lot for your help, it really helped. Regards, AKM
Thank you all for your replies! It helps!
Hi, If you set otel.exporter.otlp.endpoint then you shouldn't have to set anything for the logs endpoint or the profiler logs endpoint, because they should, by default, append /v1/logs to your OTLP endpoint. It looks like you set your profiler logs endpoint but didn't include /v1/logs, which is what I think is causing your exporting error.
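A minimal sketch of that setup, assuming the http/protobuf exporter and a Collector reachable at http://otel-collector:4318 (the hostname is a placeholder):

# set only the base endpoint; per-signal paths (/v1/traces, /v1/metrics, /v1/logs) are appended automatically
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4318"
# leave the logs endpoint and the profiler logs endpoint unset;
# if you do override them, include the full path, e.g. http://otel-collector:4318/v1/logs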