Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

I seem to recall that you could ingest an evtx file by uploading it via the web interface to a Windows instance of the Splunk server, but if that is indeed the case, it's pretty much the only way to do anything with a raw evtx file using Splunk's own mechanisms. Evtx is a proprietary Windows file format with no officially available documentation. There are some reverse-engineered "specs" of the format and some libraries/tools claiming support for it, but you can never be 100% sure. You could try writing your own scripted/modular input using the Python module https://github.com/williballenthin/python-evtx (see the sketch below).
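If you do go down the scripted-input route, registering the script is just an inputs.conf stanza; everything below (app name, script path, interval, index) is a placeholder sketch, not a tested configuration:

[script://$SPLUNK_HOME/etc/apps/evtx_ingest/bin/evtx_to_xml.py]
# hypothetical script that converts the .evtx file to XML events on stdout
disabled = 0
interval = 300
sourcetype = XmlWinEventLog
index = main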
I want my alert to trigger when the result count is between 250 and 500. I am trying to use the custom trigger condition in the alert setup with search count => 250 AND search count <=500, but this is not working as expected. Even trying to use the custom trigger condition with a single condition like search count => 250 is not working. What is the right way to do this?
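A minimal sketch of the usual pattern for this kind of range check (an assumption here, not taken from the original alert: the base search must actually produce a count field, for example by ending in | stats count, and the comparison operator is >=, not =>):

Base search:
... | stats count

Custom trigger condition:
search count >= 250 AND count <= 500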
@Mario.Morelli can you provide any further insight for @Uma.Boppana on this?
The goal here is that Windows logs that are moved off a system can be added to a NAS location that I can mount to the Splunk instance. With this, I can then ingest the logs as normal while maintaining the same source as windows:security. However, this is stated to be an API call, so I am not sure whether applying the following stanza would work:

[WinEventLog://Security]
disabled = 0
index = test01

Some other details: the logs are coming off a Windows system that is isolated and not connected to Splunk, and Splunk says you can't monitor .evtx files with a monitor stanza. The NAS location is Linux-based, so the logs would be dropped in a directory such as /Nas/Windows/Hostname. Any best practices to make this work?
Well, as @gcusello already pointed out - you'd be paying for both your Cloud ingest volume and your on-prem volume. If that's fine with you... There are other possible issues though, and whether you can do that depends on how you're sending your data.
1) You can't specify multiple httpout stanzas in your forwarder, so if you want to send using S2S over HTTP, tough luck.
2) I'm not sure, but I seem to recall that you can't send both to tcpout and httpout (you might try searching this forum for details).
3) So we're left with two splunktcp outputs. It should work, but remember that blocking one output blocks both outputs.
4) It also gets trickier to maintain if you want to selectively forward data from separate inputs - you have to remember which inputs to route to which outputs.
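To make points 3 and 4 concrete, here is a rough sketch of what the two-destination layout tends to look like (group names, hosts and the monitored path are placeholders, and none of this is tested against your environment):

# outputs.conf on the forwarder - everything goes to both groups by default
[tcpout]
defaultGroup = onprem_indexers, cloud_forwarders

[tcpout:onprem_indexers]
server = onprem-idx1.example.com:9997

[tcpout:cloud_forwarders]
server = cloud-fwd1.example.com:9997

# inputs.conf - override the routing for one particular input
[monitor:///var/log/myapp]
_TCP_ROUTING = onprem_indexers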
@dural_yyz Close but not quite. SHC uses the Raft algorithm. It will work with just two nodes, but it won't handle an outage of any node. True, it needs a quorum to elect a leader, but in a 2-node cluster a quorum can be obtained by having the votes of both nodes. The problem starts when one node is down, because with just one node alive you can never get a quorum. The same is also true for any even number of nodes - it needs (N/2)+1 votes for quorum, so while an even-noded cluster can survive an outage of (N/2)-1 nodes, it cannot function with an even split, like half of the nodes in one datacenter, the other half in another, and a network outage between them. Odd-noded clusters are therefore simply more cost-effective, because adding one more node to make a cluster even-noded doesn't increase resilience. Additionally, with Splunk's SHC you can enforce a manually set captain, bypassing the normal Raft election.

@HarishSamudrala Actually, an SHC consists of two "separate" clusters - one is your normal cluster formed of splunkd processes, the other is a "hidden" cluster formed of mongodb (kvstore) instances. Typically they share captaincy, but it's not a must. In your case it seems that, due to some communication problem, the kvstore cluster can't get the nodes to talk to each other, so you can't get them both to form a quorum and decide which one is the captain.
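To spell out the quorum arithmetic behind that (standard majority voting, nothing Splunk-specific): a cluster of N members needs floor(N/2)+1 votes, so

N = 2 -> quorum 2 -> survives 0 member failures
N = 3 -> quorum 2 -> survives 1
N = 4 -> quorum 3 -> survives 1
N = 5 -> quorum 3 -> survives 2

which is why going from 2 to 3 members buys you something, while going from 3 to 4 does not.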
Let me give this a try.  Thank you, Giuseppe
Hi! Did you ever experience this message when clicking the "View results in Splunk" link included in the email? I was trying to edit dispatch.ttl to make the search lifetime a bit longer, but did not succeed. I was wondering whether action.email.ttl is the setting for this issue. Regards
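In case it helps to see the two settings side by side, both can be set per saved alert in savedsearches.conf; the stanza name and values below are examples only, a sketch rather than a verified fix for the expired-link problem:

# savedsearches.conf
[My Alert]
# how long the search artifact (the results behind the email link) is kept; 2p = two scheduled periods
dispatch.ttl = 2p
# ttl applied when the email action fires, in seconds (86400 = 1 day)
action.email.ttl = 86400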
Hello guys, We are getting this message in splunkd.log on one heavy forwarder; we are using TCP-SSL in inputs.conf: "11-14-2024 16:59:44.129 +0100 WARN  SSLCommon [53742 FwdDataReceiverThread] - Received fatal SSL3 alert. ssl_state='SSLv3 read client certificate A', alert_description='unknown CA'." How do you identify the source host? Is it blocking incoming data or is it just a warning? Maybe this can help?

index=_* host=myhf1 source="/OPT/splunk/var/log/splunk/metrics.log" tcp_Kprocessed="0.000"

Thanks for your help.
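One way to hunt for the offending sender (a sketch; the WARN line itself often does not carry the remote address, so the idea is to look at neighbouring SSLCommon/TcpInputProc events around the same timestamp that do):

index=_internal host=myhf1 source=*splunkd.log* (component=SSLCommon OR component=TcpInputProc)

As for impact: if that client certificate genuinely can't be validated against your configured CA, that particular TLS connection fails, so data from that sender is not received until its certificate chain is fixed; other senders on the same input keep working.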
Thanks PeteAve! I'll try that and see what happens..... 
I believe the solution is to disable the feature: create a health.conf entry in /opt/splunk/etc/system/local on the affected machines, being sure to restart Splunk after the entry is made, adding:

[health_reporter]
aggregate_ingestion_latency_health = 0

[feature:ingestion_latency]
alert.disabled = 1
disabled = 1
A cluster requires 3 or more (odd counts only).  Quorum is obtained by having 50%+1 in sync.  Having only 2 nodes means there will never be quorum.
If you want to reflect a detector status on a chart, you may want to try creating a chart using the signal that you want to monitor. Then, use the “link detector” option so that the status of that detector will show on that chart. To view alerts by application (service) or by severity, navigate to “Detectors & SLOs”. To filter by severity, click the box next to “Filter” and type “sf_severity:” and then choose “Critical” or whatever severity you want. To filter by application/service, click the “Any service/Endpoint” box next to “APM Filters” and select your application from the list.
Check the docs. https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Viz/GenerateMap  
Thank you once again! I will review it on my side and let you know once I successfully complete it.
Is this bug still an ongoing issue? We have upgraded to version 9.3.1 and receive a Forwarder Ingestion Latency message stating: "Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1362418. Unhealthy instances: Indexer3." If this bug is still ongoing, can someone please post the workaround? Thanks in advance!
@tscroggins Thanks for your answer and for spending your time. Even if we use makeresults, do we need to specify that CSV data at every point?
@ITWhisperer Thanks for your time. We can replace the L & R values with numbers, such as L as 9 and R as 10, and then we can visualise it.
Thanks for clarifying.  Try this query. | rex mode=sed "s:<EventID>4702<\/EventID>|<TimeCreated SystemTime='[^']+'\/>|<Computer>[^<]+<\/Computer>|<Data Name='[^']+'>[^<]+<\/Data>::g"
Hello   How did you manage to eliminate duplicate fields?   Thanks!