All Posts

Hi @JGP

You can use the License API to collect this information if you want to present it or record it elsewhere - check out https://docs.appdynamics.com/appd/24.x/latest/en/extend-splunk-appdynamics/splunk-appdynamics-apis/license-api

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @Mridu27

Use the exact channel name as listed by PowerShell on the server. The names shown in the Event Viewer GUI might not be the programmatic names required by the Splunk forwarder.

To find the correct channel name for the DSC Operational logs, run this PowerShell command on the Windows server:

Get-WinEvent -ListLog *DSC* | Select-Object LogName

Similarly, for the DNS Operational logs:

Get-WinEvent -ListLog *DNS* | Select-Object LogName

Use the LogName value returned by PowerShell in your inputs.conf:

# Example inputs.conf on the Universal Forwarder

# For DSC Operational logs (use the exact name found via PowerShell)
[WinEventLog://Microsoft-Windows-DSC/Operational]
disabled = 0
index = winevents
sourcetype = WinEventLog:Microsoft-Windows-DSC/Operational

# For DNS Server Operational logs (use the exact name found via PowerShell)
[WinEventLog://Microsoft-Windows-DNS-Server/Operational]
disabled = 0
index = winevents
sourcetype = WinEventLog:Microsoft-Windows-DNS-Server/Operational

The Splunk Universal Forwarder requires the precise channel name registered with the Windows Event Log service. A few points to note:
- Characters like %4 seen in some GUI tools and .evtx file names are display artifacts and not part of the actual channel name.
- Separators are typically forward slashes (/), not underscores (_), hyphens (-), or spaces.
- Wildcards (*) are not supported within the channel name in the stanza header.
- Ensure the Splunk Universal Forwarder service account has permission to read the specified event log channels.
- Restart the Splunk Universal Forwarder service after modifying inputs.conf.

For more detail, see the Splunk docs topic "Monitor Windows event log data".
@Mridu27 Ensure that the channel name you're using matches exactly what is listed in Event Viewer. Even small discrepancies can cause errors. See https://community.splunk.com/t5/Getting-Data-In/Failed-to-find-Event-Log/m-p/363954
I get that occasionally on many different logs.  Setting crcSalt usually helps.
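For reference, crcSalt is set per monitor stanza in inputs.conf. Here is a minimal sketch - the path and index are placeholders, not from the original thread:

```ini
# inputs.conf - illustrative monitor stanza; path and index are placeholders
[monitor:///var/log/myapp/*.log]
# Mix the full source path into the CRC calculation, so files whose first
# 256 bytes are identical (e.g. a common header) are not treated as the
# same file and skipped
crcSalt = <SOURCE>
index = main
```

Note that `<SOURCE>` is the literal string to write; Splunk substitutes the file's path at runtime.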
Yes, PowerConnect costs serious money; we are way too cheap for that. We're currently waiting for the switch to HANA - apparently it will have better external file logging, from what I've heard so far.
Ah, OK. So you have an all-in-one instance, not a standalone indexer. In that case props.conf should indeed be on the AIO box. But you have:

DATETIME_CONFIG = NONE

Quoting the props.conf spec:

* This setting may also be set to "NONE" to prevent the timestamp extractor from running or "CURRENT" to assign the current system time to each event.

Just remove this line from your config.
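For illustration, the relevant part of props.conf after the fix might look like this - the sourcetype name below is a placeholder, not from the original thread:

```ini
# props.conf on the all-in-one instance ([my:sourcetype] is a placeholder)
[my:sourcetype]
# DATETIME_CONFIG = NONE   <- remove this line entirely; with DATETIME_CONFIG
#                             unset, Splunk's automatic timestamp extractor
#                             runs against each event again
```

After editing, restart Splunk (or reload the config) and re-ingest a test event to confirm timestamps are extracted.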
I'm trying to configure Windows DNS/DSC server logs into Splunk. I'm done with the Audit logs, but the Operational logs are causing an error in Splunk, which I think may be because of the %4 in the log name (please refer to the image). Is there any other way to get these logs into Splunk? I tried with * (wildcard) as well.

message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC_Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC-Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC*'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC%4Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC Operational'

I have tried almost all combinations.
It's not clear what you want to achieve, but:

1. Volume-wise, properly designed Splunk environments can process up to the petabytes-per-day range (of course not on a single server), so "thousands" of tickets won't impress your Splunk.

2. Remember that Splunk is not your typical RDBMS, so you might want to think deeply about what you want to achieve and what data you need to do it. Remember that once you ingest an event, it is immutable.
I have used this regex: \^([^=]+)=([^^]*)

Sample event:

Apr 23 21:43:22 3.111.9.101 CEF:0|Seqrite|EPS|5.2.1.0|Data Loss Prevention Event|^|channelType=Applications/Online Services^domainName=AVOTRIXLABS^endpointName=ALEI5-ANURAGR^groupName=Default^channelDetail=Microsoft OneDrive Client^documentName=^filePath=C:\Users\anurag.rathore.AVOTRIXLABS\OneDrive - Scanlytics Technology\Documents\git\splunk_prod\deployment-apps\Fleet_Management_Dashboard\appserver\static\fontawesome-free-6.1.1-web\svgs\solid\flask-vial.svg^macID1=9C-5A-44-0A-26-5B^status=Success^subject=^actionId=Skipped^printerName=^recipientList=^serverDateTime=Wed Apr 23 16:13:57 UTC 2025^matchedItem=Visa^sender=^contentType=Confidential Data^dataId=Client Application^incidentOn=Wed Apr 23 16:07:38 UTC 2025^ipAddressFromClient=***.***.*.16^macID2=00-FF-58-34-31-0E^macID3=B0-FC-36-CA-1C-73^userName=anurag.rathore

It extracts all fields correctly except a few. Here documentName should be empty, but it is showing this at search time.
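For what it's worth, the regex itself does capture documentName as an empty string. A quick Python sketch (using a shortened, simplified version of the payload, not the full original event) illustrates this, which suggests the unexpected value comes from how Splunk handles empty captures (an empty match normally means the field simply isn't created) or from an overlapping extraction, rather than from the regex:

```python
import re

# Shortened, simplified sample of the '^'-delimited key=value payload
event = ("^channelType=Applications/Online Services"
         "^domainName=AVOTRIXLABS"
         "^documentName="            # empty value, followed by next key
         "^status=Success")

# Same pattern as in the post: key up to '=', value up to the next '^'
pairs = dict(re.findall(r"\^([^=]+)=([^^]*)", event))

print(pairs["documentName"])  # empty string: the regex captures it as ''
```

If the field must exist even when empty, one option is a search-time fillnull, e.g. `| fillnull value="" documentName`.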
Is there any metric or any other way to monitor the AppDynamics license usage count, rather than looking at the actual License page in the controller console?
The URL in log4j2 is "http://127.0.0.1:8088", and Splunk Web (localhost) is running on port 8000, whereas the project listener is on port 8081. Yes, I have enabled SSL. Most documentation has the same settings, so I followed them, yet I cannot see the logs.
Hi @fhatrick

Splunk HEC typically listens on port 8088 - have you changed this default port to something else? Also, have you enabled SSL for HEC? If not, you will need to use http:// instead of https://.
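As a quick sanity check, you can post a test event directly to HEC from the application host before touching the log4j2 config. The hostname and token below are placeholders; use http:// instead of https:// if SSL is not enabled on HEC:

```shell
# Send a minimal test event to HEC (replace host and token with your own)
curl -k "https://127.0.0.1:8088/services/collector/event" \
     -H "Authorization: Splunk <your-hec-token>" \
     -d '{"event": "hello from curl"}'
# A healthy HEC responds with: {"text":"Success","code":0}
```

If this works but log4j2 still doesn't, the problem is in the appender configuration rather than in HEC itself.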
Hi @Ana_Smith1

In the grand scheme of the data ingestion that Splunk deals with, I think you'd be hard-pushed to have any issues importing thousands of Jira tickets per day. Even if each ticket were 20,000 characters long, you would only be looking at roughly 20 megabytes of data for 1,000 tickets, and a single Splunk indexer is typically capable of around 300 gigabytes of ingestion per day. That said, if you are concerned, I would recommend trying first with a limited dataset and then expanding out.
Hi @Ana_Smith1

You can use the "Jira Issue Input Add-on" app at https://splunkbase.splunk.com/app/6168, which allows you to run JQL against your Jira instance to pull down tickets based on your search criteria.
Hi everyone,

As part of a project, I'm integrating Jira with Splunk to visualize ticket data (status, priority, SLA, etc.). I'm currently concerned about the data volume and how Splunk handles high ticket traffic. Does anyone have experience with sending a large number of Jira tickets (thousands or more) to Splunk on a regular basis?

- Are there limits or performance issues to be aware of?
- Should I split the integration by project, or is it manageable in a single pipeline?
- Are there any best practices for optimizing ingestion and storage in Splunk in such cases?

Any insights or shared experiences would be highly appreciated. Thanks in advance!
Hello PickleRick,

The architecture is simple: I have Universal Forwarders on around 30 servers with /opt/splunkforwarder/etc/apps/druid_forwarder/default/inputs.conf (contents are in the first post), and then I have 1 indexer with /opt/splunk/etc/apps/druid_utils/default/props.conf (contents are in the first post). The inputs.conf is only on the universal forwarder(s), while the props.conf is only on the indexer.
I’m working on a project that requires integrating Jira with Splunk to collect ticket data (such as status, priority, and SLA information) and visualize it in real-time dashboards. What are the best practices or tools for doing this efficiently, especially across multiple Jira projects?
I think the problem is in the indexer node itself, but I still can't figure out why it can only query Splunk internal logs.
First, I have 3 different servers (HF, SH, and IDX), and the distributed search goes to the IDX. There was an incident where the IDX server shut down, and after I started the Splunk services again, I can't query any data. I tried querying index=* and got no results.
Doesn't PowerConnect need a paid SAP add-on? The original post asked how to monitor for free.