Yes, PowerConnect costs serious money. We are way too cheap for that. Currently waiting for the switch to HANA; apparently it will have better external file logging, from what I've heard so far.
Ah, OK. So you have an all-in-one instance, not a standalone indexer. In that case props should indeed be on the AIO box. But you have DATETIME_CONFIG=NONE. Quoting the spec:

* This setting may also be set to "NONE" to prevent the timestamp extractor from running, or "CURRENT" to assign the current system time to each event.

Just remove this line from your config.
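For reference, a minimal sketch of what the corrected stanza might look like on the AIO box, assuming a hypothetical sourcetype name (druid:log); with DATETIME_CONFIG gone, Splunk's default timestamp extraction runs again:

```
# props.conf on the all-in-one box -- hedged sketch; sourcetype name is hypothetical
[druid:log]
# DATETIME_CONFIG = NONE   <-- remove this line; NONE disables the timestamp extractor
# Optionally pin the format explicitly instead of relying on auto-detection:
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```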
I'm trying to configure Windows DNS/DSC server logs into Splunk. I'm done with the Audit logs, but the Operational logs are creating errors in Splunk, which I think may be because of the %4 in their name (please refer to the image). Is there any other way to get these logs into Splunk? I tried it with * (wildcard) as well.

message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC_Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC-Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC*'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC%4Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC Operational'

Tried almost all combinations.
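For what it's worth, the %4 in the on-disk .evtx filename is just the encoded form of a forward slash, so the channel name Splunk needs is Microsoft-Windows-DSC/Operational. A hedged inputs.conf sketch (verify the exact channel name on the host with wevtutil el first):

```
# inputs.conf on the UF -- hedged sketch; confirm the channel exists with `wevtutil el`
[WinEventLog://Microsoft-Windows-DSC/Operational]
disabled = 0
```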
It's not clear what you want to achieve, but:
1. Volume-wise, properly designed Splunk environments can process volumes in the petabytes-per-day range (of course not on a single server), so "thousands" of tickets won't impress your Splunk.
2. Remember that Splunk is not your typical RDBMS, so you might want to think deeply about what you want to achieve and what data you need to do it. Remember that once you ingest an event, it's immutable.
I have used this regex: \^([^=]+)=([^^]*)

Apr 23 21:43:22 3.111.9.101 CEF:0|Seqrite|EPS|5.2.1.0|Data Loss Prevention Event|^|channelType=Applications/Online Services^domainName=AVOTRIXLABS^endpointName=ALEI5-ANURAGR^groupName=Default^channelDetail=Microsoft OneDrive Client^documentName=^filePath=C:\Users\anurag.rathore.AVOTRIXLABS\OneDrive - Scanlytics Technology\Documents\git\splunk_prod\deployment-apps\Fleet_Management_Dashboard\appserver\static\fontawesome-free-6.1.1-web\svgs\solid\flask-vial.svg^macID1=9C-5A-44-0A-26-5B^status=Success^subject=^actionId=Skipped^printerName=^recipientList=^serverDateTime=Wed Apr 23 16:13:57 UTC 2025^matchedItem=Visa^sender=^contentType=Confidential Data^dataId=Client Application^incidentOn=Wed Apr 23 16:07:38 UTC 2025^ipAddressFromClient=***.***.*.16^macID2=00-FF-58-34-31-0E^macID3=B0-FC-36-CA-1C-73^userName=anurag.rathore

It is able to extract all fields correctly except a few. Here documentName should be empty, but it is showing this at search time.
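One hedged alternative to a single regex is to let Splunk split on the delimiters itself, which keeps empty values empty. A sketch using DELIMS (the stanza, report, and sourcetype names are hypothetical):

```
# transforms.conf -- hedged sketch; stanza name is hypothetical
[seqrite_kv]
DELIMS = "^", "="

# props.conf -- sourcetype name is assumed
[seqrite:eps]
REPORT-seqrite_kv = seqrite_kv
```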
The URL is "http://127.0.0.1:8088" in log4j2, and Splunk on localhost is running on port 8000, whereas the project listener is on port 8081. Yes, I have enabled SSL. Most documentation has the same settings, so I followed the same, yet I cannot see the logs.
Hi @fhatrick Splunk HEC typically listens on port 8088 - have you changed this default port to something else? Have you enabled SSL for HEC? If not, you will need to use http:// instead of https://.
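For reference, the HEC port and its SSL flag live in inputs.conf on the Splunk side, not in Splunk Web's port 8000. A hedged sketch of where to check (the token stanza name is hypothetical):

```
# inputs.conf (e.g. $SPLUNK_HOME/etc/apps/splunk_httpinput/local/) -- hedged sketch
[http]
disabled = 0
port = 8088        # HEC listens here; 8000 is Splunk Web, 8081 is your app, neither is HEC
enableSSL = 1      # with SSL on, the log4j2 url must be https://127.0.0.1:8088

[http://my_hec_token]   # token stanza name is hypothetical
token = <your-token-guid>
```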
Hi @Ana_Smith1 In the grand scheme of the data ingestion that Splunk deals with, I think you'd be hard-pushed to have any issues importing thousands of Jira tickets per day. Even if they were 20,000 characters long, you would only be looking at somewhere in the region of 20 megabytes of data for 1,000 tickets, and a single Splunk indexer is typically capable of 300 gigabytes of ingestion per day. That said, I would recommend trying with a limited dataset first and then expanding out if you are concerned.
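If you want to sanity-check the real volume once data is flowing, a hedged SPL sketch against the license usage log (the index name jira is an assumption):

```
index=_internal source=*license_usage.log type=Usage idx=jira
| stats sum(b) AS bytes BY idx
| eval MB = round(bytes/1024/1024, 2)
```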
Hi @Ana_Smith1 You can use the "Jira Issue Input Add-on" app at https://splunkbase.splunk.com/app/6168, which allows you to run JQL against your Jira instance to pull down tickets based on your search criteria.
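Once tickets are indexed, a dashboard panel search could look something like this hedged sketch (the index, sourcetype, and field names are assumptions that depend on how you configure the add-on):

```
index=jira sourcetype=jira:issue
| stats latest(status) AS status latest(priority) AS priority BY key
| stats count BY status priority
```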
Hi everyone, As part of a project, I'm integrating Jira with Splunk to visualize ticket data (status, priority, SLA, etc.). I'm currently concerned about the data volume and how Splunk handles high ticket traffic. Does anyone have experience with sending a large number of Jira tickets (thousands or more) to Splunk on a regular basis?
- Are there limits or performance issues to be aware of?
- Should I split the integration by project, or is it manageable in a single pipeline?
- Are there any best practices for optimizing ingestion and storage in Splunk in such cases?
Any insights or shared experiences would be highly appreciated. Thanks in advance!
Hello PickleRick, The architecture is simple: I have Universal Forwarders on around 30 servers with /opt/splunkforwarder/etc/apps/druid_forwarder/default/inputs.conf (contents are in the first post), and then I have one indexer with /opt/splunk/etc/apps/druid_utils/default/props.conf (contents are in the first post). The inputs.conf is only on the universal forwarder(s), while the props.conf is only on the indexer.
I’m working on a project that requires integrating Jira with Splunk to collect ticket data (such as status, priority, and SLA information) and visualize it in real-time dashboards. What are the best practices or tools for doing this efficiently, especially across multiple Jira projects?
First, I have 3 different servers (HF, SH, and IDX), and distributed search goes to the IDX. There was an incident where the IDX server shut down, and after I started it up and ran the Splunk services again, I can't query any data. I tried to query index=* and got no results.
Hi It's like @livehybrid said. You cannot / shouldn't try it that way. Basically there are two options, depending on how your data is collected and where it's created. On the SCP side, you can set up Federated Search in your SCP and use it to access data from another SCP stack. See more at https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/FederatedSearch/fsoptions. The second option is to replicate the data before you send it into the SCP stack; e.g. you could set up your own HFs where you can configure this (see the sketch below). r. Ismo
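For the second option, a hedged outputs.conf sketch of data cloning on your own HF (group names and server addresses are hypothetical); listing two groups in defaultGroup sends a copy of every event to both destinations:

```
# outputs.conf on the HF -- hedged sketch; group names and hosts are hypothetical
[tcpout]
defaultGroup = scp_stack_a, scp_stack_b   # comma-separated list = clone to both groups

[tcpout:scp_stack_a]
server = inputs.stack-a.splunkcloud.com:9997

[tcpout:scp_stack_b]
server = inputs.stack-b.splunkcloud.com:9997
```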
Can you tell us more about what you have done in this installation, how you did it, and what kind of distributed environment you have? Is the problematic node an indexer, a search head, or some other node?