
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Ok. First and foremost - do you have the add-on installed? Secondly - did you look at what's in the logs?
1. Please don't post screenshots - copy-paste your code and results into code blocks or preformatted paragraphs. It makes things easier for everyone and is searchable.
2. You're trying to do something that is generally not supported - you can generate conditions for a search dynamically by means of a subsearch, but not whole searches. To some extent you could use the map command, but it is relatively limited.
3. You can't use multisearch with non-streaming commands (like tstats).
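As a rough illustration of the subsearch approach (a minimal sketch only, assuming a lookup named my_lookup_file with a url_regex field holding wildcard patterns, as in the question below - the subsearch is rendered into conditions for the outer tstats):

| tstats summariesonly=true count from datamodel=Web
    where
        [| inputlookup my_lookup_file
         | fields url_regex
         | rename url_regex as Web.url
         | format ]
    by Web.url Web.user

Note that this treats the lookup values as wildcard patterns, not regexes; true regex matching would have to happen after the tstats, for example with the regex command or where match().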
Hello Splunk experts, I'm currently trying to create a search using a multisearch command where I need to dynamically apply regex patterns from a lookup file to the Web.url field in a tstats search. With my current approach, the regex value is added as a literal search condition instead of being applied as a regex filter. For example, instead of dynamically matching URLs with the regex, it ends up searching for the literal pattern.

I have a lookup that contains fields like url_regex and other filter parameters, and I need to:
1. Dynamically use these regex patterns in the search, so that only URLs matching the regex from the lookup get processed further.
2. Ensure that the logic integrates correctly within a multisearch, where the base search is filtered dynamically based on these values from the lookup.

I've shared some screenshots showing the query and the resulting issue, where the regex appears to be used incorrectly. How can I properly use these regex values to match URLs instead of treating them as literal strings?

Search:
| inputlookup my_lookup_file
| search Justification="Lookup Instructions"
| fields url_regex, description
| fillnull value="*"
| eval url_regex="Web.url=\"" . url_regex . "\""
| eval filter="source=\"my_sourcetype\" " . "filter_field=" . " \""
| eval search="| tstats `summariesonly` prestats=true count from datamodel=Web where sourcetype=\"" . filter . " by Web.url Web.user"
| stats values(search) as search
| eval search=multisearch [ mvjoin(search, " | ") ] . " | stats count by search"

As highlighted in yellow above, I want the regex to be used for matching rather than searched for as a literal string in the events. Also, once the multisearch query generates another search as output, how can I automatically execute that resulting search within my main query? Any guidance would be greatly appreciated!
Hi there, I'm using this API: https://splunk.github.io/splunk-add-on-for-amazon-web-services/APIreference/

Whenever I send a POST request to create metadata inputs that already exist, I get a 500 Internal Server Error.

Error: Unable to create metadata inputs: Unexpected HTTP status: 500 Internal Server Error (500)

Expected behaviour: do not return 500; return a payload that indicates that the resource already exists.
I've found the custom triggers to be unreliable at best.  What works better is to put the alert condition in the search query and have the alert trigger when the number of results is not zero.
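For example, a minimal sketch of that approach for the 250-500 range asked about below (the base search is a placeholder):

<your base search>
| stats count
| where count >= 250 AND count <= 500

Then set the alert to trigger when the number of results is greater than zero.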
I seem to recall that you could ingest an evtx file by uploading it via the web interface to a Windows instance of a Splunk server, but if that is indeed the case, it's pretty much the only way to do anything with a raw evtx file using Splunk's own mechanisms. EVTX is a proprietary Windows file format with no officially available documentation. There are some reverse-engineered "specs" of the file format and some libraries/tools claiming support for it, but you can never be 100% sure. You could try writing your own scripted/modular input using Python's python-evtx module: https://github.com/williballenthin/python-evtx
I want my alert to trigger when the result count is between 250 and 500. I am trying to use the custom trigger condition in the alert setup with

search count => 250 AND search count <= 500

but this is not working as expected. Even trying to use the custom trigger condition for a single condition like search count => 250 is not working. What is the right way to do this?
@Mario.Morelli can you provide any further insight for @Uma.Boppana on this?
The goal here is that Windows logs that are moved off a system can be added to a NAS location that I can mount on the Splunk instance. I can then ingest the logs as normal, keeping the same source as windows:security. However, this is stated to be an API call, so I am not sure whether applying the following stanza would work:

[WinEventLog://Security]
disabled = 0
index = test01

Some other details: the logs are coming off a Windows system that is isolated and not connected to Splunk. Splunk says you can't monitor .evtx files with a monitor stanza. The NAS location is Linux-based, so the logs would be dropped in a directory such as /Nas/Windows/Hostname. Any best practices to make this work?
Well, as @gcusello already pointed out - you'd be paying for both your Cloud ingest volume and your on-prem volume. If that's fine with you... There are other possible issues though and whether you can do that depends on how you're sending your data.
1) You can't specify multiple httpout stanzas in your forwarder. So if you want to send using s2s over http, tough luck.
2) I'm not sure but I seem to recall that you can't send both to tcpout and httpout (you might try to search this forum for details).
3) So we're left with two splunktcp outputs. It should work but remember that blocking one output blocks both outputs.
4) It also gets even more tricky to maintain if you want to selectively forward data from separate inputs - you have to remember which inputs to route to which outputs.
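As a rough sketch of what the two-splunktcp-output setup could look like in outputs.conf (group names and hosts below are placeholders, not taken from this thread):

[tcpout]
defaultGroup = onprem_indexers, cloud_indexers

[tcpout:onprem_indexers]
server = idx1.onprem.example:9997

[tcpout:cloud_indexers]
server = inputs.mystack.splunkcloud.com:9997

Both groups receive a copy of the data by default; per point 3 above, if either destination blocks, forwarding to both is affected.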
@dural_yyz Close but not quite. SHC uses the raft algorithm. It will work with just two nodes but won't handle an outage of any node. True, it needs a quorum to elect a leader, but a quorum can be obtained in a 2-node cluster by having the votes of both nodes. The problem starts when one node is down, because with just one alive node you can never get a quorum. The same is also true for any even number of nodes - it needs (N/2)+1 votes for a quorum, so while an even-node cluster can survive an outage of (N/2)-1 nodes, it cannot function if you have an even split, like half of the nodes in one datacenter, the other half in another, and a network outage between them. So odd-noded clusters are simply more cost-effective, because adding one more node to make a cluster even-noded doesn't increase resilience. Additionally, with Splunk's SHC you can simply enforce a manually set captain, bypassing the normal raft election.

@HarishSamudrala Actually, an SHC consists of two "separate" clusters - one is your normal cluster formed of splunkd processes, the other is a "hidden" cluster formed of mongodb (kvstore) instances. Typically they share captaincy, but it's not a must. In your case it seems that due to some communication problems the kvstore cluster can't get the nodes to communicate with each other, so you can't get them both to form a quorum and decide which one is the captain.
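To make the quorum arithmetic concrete (quorum = floor(N/2)+1): with N=2 the quorum is 2, so losing either node halts elections; with N=3 the quorum is 2, so one node can be down; with N=4 the quorum is 3, which still tolerates only one node down, and a 2/2 site split leaves neither side with a quorum.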
Let me give this a try.  Thank you, Giuseppe
Hi! Did you ever experience this message when clicking the "View results in Splunk" link included in the email? I was trying to edit dispatch.ttl to make the search artifacts live a little longer, but did not succeed. I was wondering if action.email.ttl is the right setting for this issue. Regards
Hello guys, We are getting this message in splunkd.log on one heavy forwarder; we are using TCP-SSL in inputs.conf:

"11-14-2024 16:59:44.129 +0100 WARN  SSLCommon [53742 FwdDataReceiverThread] - Received fatal SSL3 alert. ssl_state='SSLv3 read client certificate A', alert_description='unknown CA'."

How do you identify the source host? Is it blocking incoming data or just a warning?

Maybe this can help?
index=_* host=myhf1 source="/OPT/splunk/var/log/splunk/metrics.log" tcp_Kprocessed="0.000"

Thanks for your help.
Thanks PeteAve! I'll try that and see what happens..... 
I believe the solution is to disable the feature: create a health.conf entry in /opt/splunk/etc/system/local on the affected machines, being sure to restart splunk after the entry is made. Adding:

[health_reporter]
aggregate_ingestion_latency_health = 0

[feature:ingestion_latency]
alert.disabled = 1
disabled = 1
A cluster requires 3 or more (odd counts only).  Quorum is obtained by having 50%+1 in sync.  Having only 2 nodes means there will never be quorum.
If you want to reflect a detector status on a chart, you may want to try creating a chart using the signal that you want to monitor. Then, use the “link detector” option so that the status of that detector will show on that chart. To view alerts by application (service) or by severity, navigate to “Detectors & SLOs”. To filter by severity, click the box next to “Filter” and type “sf_severity:” and then choose “Critical” or whatever severity you want. To filter by application/service, click the “Any service/Endpoint” box next to “APM Filters” and select your application from the list.
Check the docs. https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Viz/GenerateMap  
Thank you once again! I will review it on my side and let you know once I successfully complete it.