All Posts

Thanks @PickleRick for your reply. Hello everyone, I am pretty new to app dev, so pardon me; let me explain step by step:
1) The external_cmd script ucd_category_lookup.py is present (screenshot).
2) The lookup definition had not been created previously, so I just created one: gave it permissions in the Search app, selected ucd_category_lookup.py as the external command, and added and verified the fields properly.
3) Lookup definition done, Splunk restart done, but still the same error.
4) I thought an automatic lookup was what was needed, so I created one.
5) Splunk restart done, still the same error.
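For comparison, an external (scripted) lookup definition usually corresponds to a transforms.conf stanza along these lines; the stanza name matches the post, but the field names are assumptions since the actual fields aren't shown:

# transforms.conf -- a minimal sketch of an external lookup; field names are hypothetical
[ucd_category_lookup]
external_cmd = ucd_category_lookup.py url category
fields_list = url, category

As far as I know, the script has to live in the app's bin directory and must read and write exactly the fields named in fields_list, otherwise the lookup command fails even though the definition exists.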
PickleRick-san, gcusello-san, thank you for the response. As you pointed out, the KV store did not seem to be working properly. I reinstalled with FIPS mode turned off and it works fine. Is it impossible to install Splunk ES on a Splunk server running in FIPS mode?
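For anyone troubleshooting the same symptom, KV store health can be checked from the CLI, and FIPS mode is the splunk-launch.conf setting shown below (worth double-checking against the docs for your version):

# check KV store health on the search head
$SPLUNK_HOME/bin/splunk show kvstore-status

# splunk-launch.conf -- FIPS mode is typically enabled with this line
SPLUNK_FIPS = 1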
Hi Splunk Community, is there a way to capture the host of a UF as it's passing through a HF, i.e. to add the host right before the log message is processed? I have tried a few things with no luck, but I'm asking here while I dig through the documentation. Thanks!
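A host override on the HF is usually done with an index-time transform along these lines; the sourcetype and regex below are purely hypothetical, since the post doesn't show the data:

# props.conf on the HF (sourcetype name is an assumption)
[my_sourcetype]
TRANSFORMS-set_host = set_host_from_raw

# transforms.conf -- rewrite the host metadata from something found in the raw event
[set_host_from_raw]
REGEX = hostname=(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host

Note that a UF normally stamps its own host in the metadata already, so a transform like this only makes sense if the host has to be derived from the event text instead.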
Also, unless you absolutely have to, avoid using leading wildcards in the search term, e.g. host="*..."
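Purely as an illustration (both hostnames are made up), the first search below can be resolved efficiently from the index terms, while the leading wildcard in the second forces a much broader scan:

index=web host="app-prod-*"
index=web host="*-prod-01"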
It looks like this mystery has not been solved within those 11 years though! If your guess is correct, I guess they hadn't implemented pull requests yet when that code was added... I went looking for this answer because in Splunk Cloud the _audit index is set to this default value for retention, and you are unable to change the retention setting for internal indexes in Splunk Cloud. So if you have a requirement to retain audit logs for 6 years, you're technically short by 6 days!
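For on-prem deployments (where the setting can be changed), the retention being discussed is frozenTimePeriodInSecs in indexes.conf; the 6-year figure below is my own arithmetic (2190 days), not a Splunk default.

# indexes.conf -- a sketch, not applicable to internal indexes on Splunk Cloud
[_audit]
# 2190 days x 86400 s = 189216000 s (~6 years); the shipped default of 188697600 s is 6 days less
frozenTimePeriodInSecs = 189216000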
!!! NEW !!! Upcoming Cloud Event Alert !!! Register Today !!! Accelerate Digital Resilience with Splunk’s AI-Driven Cloud Platform | Thursdays, January 30 & February 6 | 10AM PT / 1PM ET. Join us to hear top experts share how migrating to Splunk Cloud enhances data security and visibility, while empowering your organization to boost digital resilience to stay ahead in the AI era. Don’t miss this opportunity to learn from the best!
Thank you for the response; I’ve updated the query now.
I need to forward data from a heavy forwarder to two different indexer clusters. One of the clusters needs to have a field removed. If I use SEDCMD in props.conf on the HF it removes it for both, and putting SEDCMD in props.conf on one of the indexers doesn't work (it does work if I bypass the HF). Is there a way to do this? Edit: I was thinking of using an intermediate forwarder, so heavy forwarder -> another heavy forwarder -> indexer cluster, but props.conf on the intermediate heavy forwarder does not work.
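For context, this is roughly the configuration being described (sourcetype, field pattern and server names are all hypothetical). Because SEDCMD runs while the HF parses the data, the modified events are what get forwarded to both output groups, which matches the behaviour above:

# props.conf on the HF
[my_sourcetype]
SEDCMD-strip_field = s/secret_field=\S+//g

# outputs.conf on the HF
[tcpout:clusterA]
server = idxA1:9997
[tcpout:clusterB]
server = idxB1:9997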
Sorry, I should also mention that this also happens with the roles endpoint, which is where I got that message.
I'm just an API consumer, so I'm not sure if I have the add-on installed (do you have more specific instructions for me to pass on to the Splunk admin?). As far as logs go, here is what I see: <msg type="ERROR"> In handler 'splunk_ta_aws_iam_roles': bad character (52) in reply size</msg> </messages>
Ok. First and foremost - do you have the add-on installed? Secondly - did you look at what's in the logs?
1. Please don't post screenshots - copy-paste your code and results into code blocks or preformatted paragraphs. It makes it easier for everyone and is searchable.
2. You're trying to do something that is generally not supported - you can generate conditions for a search dynamically by means of a subsearch, not whole searches. To some extent you could use the map command, but it is relatively limited.
3. You can't use multisearch with non-streaming commands (like tstats).
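As a small illustration of point 2 (the lookup and field names are hypothetical), a subsearch can emit search conditions via format, which get spliced into the outer search as text - but it cannot emit a whole pipeline:

index=web sourcetype=proxy
    [ | inputlookup suspicious_urls.csv
      | fields url
      | format ]
| stats count by url

The subsearch here expands to something like ( ( url="..." ) OR ( url="..." ) ) before the outer search runs.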
Hello Splunk experts, I'm currently trying to create a search using the multisearch command where I need to dynamically apply regex patterns from a lookup file to the Web.url field in a tstats search. With my current approach, the regex value is added as a literal search condition instead of being applied as a regex filter. For example, instead of dynamically matching URLs with the regex, it ends up searching for the literal pattern. I have a lookup that contains fields like url_regex and other filter parameters, and I need to:
1. Dynamically use these regex patterns in the search, so that only URLs matching the regex from the lookup get processed further.
2. Ensure that the logic integrates correctly within a multisearch, where the base search is filtered dynamically based on these values from the lookup.
I've shared some screenshots showing the query and the resulting issue, where the regex appears to be used incorrectly. How can I properly use these regex values to match URLs instead of treating them as literal strings?
Search:
| inputlookup my_lookup_file
| search Justification="Lookup Instructions"
| fields url_regex, description
| fillnull value="*"
| eval url_regex="Web.url=\"" . url_regex . "\""
| eval filter="source=\"my_sourcetype\" " . "filter_field=" . " \""
| eval search="| tstats `summariesonly` prestats=true count from datamodel=Web where sourcetype=\"" . filter . " by Web.url Web.user"
| stats values(search) as search
| eval search=multisearch [ mvjoin(search, " | ") ] . " | stats count by search"
As highlighted in yellow above, I wanted the regex to be used for matching instead of being searched as a literal string against events. Also, once the multisearch query generates another search as output, how can I automatically execute that resulting search within my main query? Any guidance would be greatly appreciated!
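Not an authoritative answer, but as a sketch of the map-based direction mentioned elsewhere in this thread: map runs one search per lookup row and substitutes that row's fields via $...$ tokens, so the regex can be evaluated with match() instead of being searched literally. The sourcetype and maxsearches value are assumptions.

| inputlookup my_lookup_file
| search Justification="Lookup Instructions"
| fields url_regex
| map maxsearches=20 search="| tstats summariesonly=true count from datamodel=Web where sourcetype=\"my_sourcetype\" by Web.url Web.user | where match('Web.url', \"$url_regex$\")"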
Hi there, I'm using this API: https://splunk.github.io/splunk-add-on-for-amazon-web-services/APIreference/ Whenever I send a POST request to create metadata inputs that already exist, I get a 500 Internal Server Error. Error: Unable to create metadata inputs: Unexpected HTTP status: 500 Internal Server Error (500) Expected behaviour: do not return 500; return a payload that indicates that the resource already exists.
I've found the custom triggers to be unreliable at best.  What works better is to put the alert condition in the search query and have the alert trigger when the number of results is not zero.
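For example (the index, sourcetype and field are made up), the kind of range condition discussed in this thread can live in the search itself, with the alert simply set to trigger when the number of results is greater than zero:

index=my_index sourcetype=my_sourcetype
| stats count
| where count >= 250 AND count <= 500

The search returns a row only when the count falls inside the window, so the stock "number of results greater than 0" trigger does the rest.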
I seem to recall that you could ingest an evtx file by uploading it via the web interface to a Windows instance of a Splunk server, but if that is indeed the case, it's pretty much the only way to do anything with a raw evtx file using Splunk's own mechanisms. Evtx is a proprietary Windows file format with no officially available documentation. There are some reverse-engineered "specs" of the file format and some libraries/tools claiming support for it, but you can never be 100% sure. You could try writing your own scripted/modular input using the Python module https://github.com/williballenthin/python-evtx
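As a rough sketch of that last idea (the file path is hypothetical, borrowed from the directory layout described in the question), python-evtx can turn each record into XML text that Splunk could then monitor as an ordinary file:

# pip install python-evtx
from Evtx.Evtx import Evtx

# hypothetical path on the NAS mount described in the question
with Evtx("/Nas/Windows/Hostname/Security.evtx") as log:
    for record in log.records():
        # each record renders as an XML string that can be written out to a text file
        print(record.xml())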
I want my alert to trigger when the result count is between 250 and 500. I am trying to use the custom trigger condition in the alert setup with     search count => 250 AND search count <=500     but this is not working as expected. Even trying to use the custom trigger condition for a single condition like search count => 250 is not working. What is the right way to do this?
@Mario.Morelli can you provide any further insight for @Uma.Boppana on this?
The goal here is that Windows logs that are moved off a system can be added to a NAS location that I can mount on the Splunk instance. With this I can then ingest logs as normal, maintaining the same source as windows:security. However, this input is stated to be an API call, so I am not sure whether applying the following stanza would work:
[WinEventLog://Security]
disabled = 0
index = test01
Some other details: the logs are coming off a Windows system that is isolated and not connected to Splunk. Splunk says you can't monitor .evtx files with a monitor stanza. The NAS location is Linux-based, so the logs would be dropped in a directory such as /Nas/Windows/Hostname. Any best practices to make this work?
Well, as @gcusello already pointed out - you'd be paying for both your Cloud ingest volume and your on-prem volume. If that's fine with you... There are other possible issues though, and whether you can do that depends on how you're sending your data.
1) You can't specify multiple httpout stanzas in your forwarder. So if you want to send using s2s over http, tough luck.
2) I'm not sure, but I seem to recall that you can't send both to tcpout and httpout (you might try to search this forum for details).
3) So we're left with two splunktcp outputs. It should work, but remember that blocking one output blocks both outputs.
4) It also gets even more tricky to maintain if you want to selectively forward data from separate inputs - you have to remember which inputs to route to which outputs.
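A sketch of the two-splunktcp-output setup from points 3 and 4 (all server names, ports and the monitored path are assumptions; a Splunk Cloud stack normally ships its own forwarder credentials app rather than a hand-written stanza):

# outputs.conf on the forwarder
[tcpout]
defaultGroup = onprem_indexers, cloud_indexers

[tcpout:onprem_indexers]
server = idx1.example.local:9997, idx2.example.local:9997

[tcpout:cloud_indexers]
server = inputs.mystack.splunkcloud.com:9997

# inputs.conf -- optional selective routing per input (point 4)
[monitor:///var/log/app.log]
_TCP_ROUTING = onprem_indexers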