All Posts

For some reason, I needed to share some data from an index under a different set of permissions. After a bit of research, I found that the CLONE_SOURCETYPE option could help with this. I created the required settings in props.conf and transforms.conf, and then pushed them to the IDXC layer. At first glance everything seemed fine, but then I discovered that CLONE_SOURCETYPE clones all events from the original sourcetype and redirects only a few to the new one. Is that the intended behavior, or did I make a serious mistake in the configuration? I expected to see only the events matching the REGEX in the original index.

props.conf

[vsi_file_esxi-syslog]
LINE_BREAKER = (\n)
MAX_TIMESTAMP_LOOKAHEAD = 24
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = \d{1,3}
TIME_PREFIX = ^<\d{1,3}>
TRANSFORMS-remove_trash = vsi_file_esxi-syslog_rt0, vsi_file_esxi-syslog_ke0
TRANSFORMS-route_events = general_file_esxi-syslog_re0

transforms.conf

[general_file_esxi-syslog_re0]
CLONE_SOURCETYPE = general_re_esxi-syslog
REGEX = FIREWALL-PKTLOG:
DEST_KEY = _MetaData:Index
FORMAT = general
WRITE_META = true
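[Editor's note: if the clone really does carry over every event, one commonly used workaround is to filter the cloned sourcetype itself with the standard nullQueue/indexQueue routing pattern, so that only matching clones are kept. This is only a sketch under that assumption; the stanza names below are hypothetical, and whether props for the cloned sourcetype are applied should be verified for your version.]

props.conf

[general_re_esxi-syslog]
TRANSFORMS-filter_clone = general_re_esxi-syslog_setnull, general_re_esxi-syslog_setparsing

transforms.conf

# first send everything from the cloned sourcetype to nullQueue...
[general_re_esxi-syslog_setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then pull the matching events back into the indexing pipeline
[general_re_esxi-syslog_setparsing]
REGEX = FIREWALL-PKTLOG:
DEST_KEY = queue
FORMAT = indexQueue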
Sure, we have some Azure Functions running C# or Java code, and therefore we have some custom log statements. They go into the Event Hub and then to Splunk, but Splunk has a problem with the format that comes from the Event Hub (nested JSONs), even though the log messages follow the Microsoft standard...
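[Editor's note: a common search-time way to unpack the nested JSON that Event Hub wraps around each message is to expand the wrapper array and re-parse each element. A minimal sketch only — the records{} wrapper field, index, and sourcetype here are assumptions that depend on how the data is onboarded:]

index=azure sourcetype=my_eventhub_sourcetype
| spath path=records{} output=record
| mvexpand record
| spath input=record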
The first thing would be to verify whether the scheduled searches were run in the first place. If they were and triggered alert actions, you should verify whether the emails were correctly sent (Ismo already provided links to other similar threads). Then you'll know where to start troubleshooting: whether it's a Splunk issue because the mails weren't sent, or whether you need to investigate on the receiving end why they weren't delivered.
It is expected. By default, Splunk sends all data to all output groups. You'd need to fiddle with event routing, which can be tricky since a UF normally doesn't do transforms.
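[Editor's note: one routing approach that does work on a UF, since _TCP_ROUTING is applied in inputs.conf rather than via transforms. A minimal sketch only — the monitor path, group names, and server addresses are assumptions:]

inputs.conf

# route only this input to groupA; other inputs still go to all groups
[monitor:///var/log/app.log]
_TCP_ROUTING = groupA

outputs.conf

[tcpout:groupA]
server = idx-a.example.com:9997

[tcpout:groupB]
server = idx-b.example.com:9997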
Can you explain in a few words what exactly you are trying to do?
Have you looked in Splunk's internal logs to see whether those alerts ran and tried to send emails? Here are some links to old answers on how you could figure that out:
Solved: Splunk stopped sending Email for alerts and report... - Splunk Community
How to troubleshoot why I'm not getting email aler... - Splunk Community
Re: Where are the failures of sendemail logged in? - Splunk Community
Sendemail not working - Splunk Community
After you have checked those, if you can't find an answer there and it's still an issue, please show what you have in your logs about those sendemail parts. Quite often the situation is that Splunk has sent those alerts, but they vanished somewhere else.
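[Editor's note: a quick way to start that check from the search bar. A sketch only — the exact source file and message text vary by Splunk version:]

index=_internal sourcetype=splunkd sendemail

index=_internal source=*python.log* sendemail ERROR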
Hi @sreddem , I suppose that you know that Cisco CDR Reporting and Analytics is a commercial app, in other words, you have to pay for it! Anyway, on the Splunkbase site (https://splunkbase.splunk.com/app/669) you can find all the instructions to install and configure it. In addition, you can find additional information at https://community.cisco.com/t5/unified-communications-infrastructure/sending-cucm-system-logs-to-syslog-splunk/td-p/4162264 Ciao. Giuseppe
This same information is stated in some other places in the MS documentation too. Basically, (almost) all logs have some delay when you try to get them via Azure's own functionality. But if you install a UF, you get them immediately.
I know this thread is old, but this information may still help. As specified in the Microsoft Learn portal, "Microsoft doesn't guarantee a specific time after an event occurs for the corresponding audit record to be returned in the results of an audit log search. For core services (such as Exchange, SharePoint, OneDrive, and Teams), audit record availability is typically 60 to 90 minutes after an event occurs. For other services, audit record availability might be longer. However, some issues that are unavoidable (such as a server outage) might occur outside of the audit service that delays the availability of audit records. For this reason, Microsoft doesn't commit to a specific time."
Hello @Nawab , Did you find an answer?
Hi Team, greetings! This is Srinivasa. Could you please provide documents on how to install and configure Splunk with Cisco Unified Communications Manager (CUCM) on-prem?
Can you share the support mail address or any contacts? I have tried to raise a ticket with support, but it failed.
Did you find a solution @rallapallisagar ?
Has anyone got a solution?
We followed this documentation: https://docs.splunk.com/Documentation/ES/8.0.40/Install/UpgradetoNewVersion It mentions that you need to update the "Splunk_TA_ForIndexer" app. During our upgrade, the required indexes were deployed on one single search head in the cluster, and we had to move them to our index cluster. We did that by our internal procedures. I am not aware of any clear documentation on exactly what you have to do if you hit this issue too.
Hi @aravind  There isn't a suppression list which customers can access. However, if you log a support ticket, they are able to check the PostMark mail server logs to see whether any emails bounced. This could help confirm:
a) whether the alert actually fired correctly
b) whether the email was accepted by the mail relay
c) whether the relay had any issue sending on to the final destination
At a previous customer we had a number of issues with the customer's email server detecting some of the Splunk Cloud alerts as spam and silently bouncing them. You can contact Support via https://www.splunk.com/support
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi, We are experiencing a critical issue where several scheduled alerts/reports are not being received by the intended recipients. This issue affects both individual mailboxes and distribution lists. Initially, only a few users reported missing alerts. However, it has now escalated, with all members of the distribution lists no longer receiving several key reports. Only a few support team members continue to receive alerts in their personal mailboxes, suggesting inconsistent delivery. Also, just checking: is there any suppression list blocking delivery?
Hi @livehybrid  Thanks a lot for your quick response, the solution worked nicely.   Regards, AKM
Thanks for suggesting this, bro. Let me try it and let you know the result.
Hi @Ramachandran  To force the omhttp module to use HTTP instead of HTTPS, you need to specify the usehttps parameter and set it to off.

action(type="omhttp"
    server="172.31.25.126"
    serverport="8088"
    usehttps="off"
    uri="/services/collector/event"
    headers=["Authorization: Splunk <token>"]
    template="RSYSLOG_SyslogProtocol23Format"
    queue.filename="fwdRule1"
    queue.maxdiskspace="1g"
    queue.saveonshutdown="on"
    queue.type="LinkedList"
    action.resumeRetryCount="-1"
)

The usehttps parameter controls whether the module uses HTTPS or HTTP to connect to the server. By default it is set to on, which means HTTPS is used; setting it to off forces the module to use HTTP. Additionally, you should use serverport instead of port to specify the port number. The behavior you're seeing is expected if you only set the port to 8088 without configuring the protocol, because the default protocol is HTTPS. https://www.rsyslog.com/doc/v8-stable/configuration/modules/omhttp.html
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
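[Editor's note: before changing the rsyslog side, it can be worth confirming that the HEC endpoint really answers over plain HTTP. A minimal test, assuming HEC has SSL disabled and the same host, port, and token as above (the token placeholder is kept as-is):]

curl http://172.31.25.126:8088/services/collector/event \
    -H "Authorization: Splunk <token>" \
    -d '{"event": "hello from curl"}'

A {"text":"Success","code":0} response indicates the token and transport are fine.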