All Posts

Hi Community, I'm fairly inexperienced when it comes to anything other than quite basic searches, so my apologies in advance. I have a field which returns several values, and I only wish to return one of them in my searches. The field name is "triggeredComponents{}.triggeredFilters{}.trigger.value" and it returns several values of different types, for example:

    1
    5
    out
    text / text text text
    hostname1
    hostname2
    445

I only wish to retrieve and view the "text / text text text" value, and then pop that into a | stats command. Please can someone offer some advice? Many thanks in advance!
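One way to do this, assuming the value you want is the only one containing a "/" (adjust the match() pattern to whatever reliably identifies your value), is mvfilter(); note the single quotes, which eval needs around field names containing {} characters:

    ... | eval trigger_value=mvfilter(match('triggeredComponents{}.triggeredFilters{}.trigger.value', "/"))
        | stats count by trigger_value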
Yes, it would be specific to HEC clients that check for the endpoint's availability with TCP connections but don't send data. This would not happen with an HF, because HTTP traffic would come into the HF and then be sent to the indexers via the S2S protocol, which wouldn't do checks like that. Sorry for the answer from far in the future.
Hi @Gunnar, thank you for your hint. In the _configtracker index there isn't any information about the user who made a change, and anyway it isn't well documented: I would have to work out the meaning of the events by myself, whereas I'm looking for documentation. Thank you again. Ciao. Giuseppe
I am trying to run this query but Splunk is complaining that the eval is malformed. https://docs.splunk.com/Documentation/SCS/current/SearchReference/EvalCommandExamples I am not sure from the docs how to fix this.
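Since the failing query isn't shown, here is a minimal well-formed eval for comparison, using a hypothetical field named code; the most common causes of a "malformed eval" error are unquoted string literals and a missing comparison operator:

    ... | eval status = if(code == 200, "OK", "Error")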
Using the Add-On Builder I built a custom Python app to collect some asset information over an API. I'll preface all of this by saying my custom Python code in VisCo works all the time, every time, no hiccups.

Using a select statement in the API request, I can gather specific fields. The more fields I define, the more issues I run into in Splunk. Basically it feels like the app is rate limited. I would expect it to run to just under an hour. It usually fails after 10 minutes and starts again at the 2-hour (7200 seconds) interval configured on the input page. If I define fewer fields in the select request, it runs a little longer but still ends up failing, and obviously I'm not getting the data I want. If I set the bare minimum of one field, it runs for the expected time, stops, and starts again at its configured interval.

I'm hesitant to say what platform, but it is cloud based. I'm running my app from an on-prem heavy forwarder indexing to Splunk Cloud. The input interval config is 2 hours. The Python script iterates through requests due to paging limitations, with delays between requests based on some math I did with the total number of assets and pages; it's about 3 seconds between requests. But again, my code works flawlessly running in VisCo. The target API isn't rate limiting me due to the scripted interval; at least, I have no reason to believe that it is.

I've opened a ticket with Splunk but I wanted to see if anyone else has experience with the Splunk Add-on Builder and custom Python modules.
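Without seeing the code it's hard to say, but one pattern worth trying is to make every request failure visible in the input's log, since a scheduled input that dies silently on a timeout looks exactly like rate limiting. A generic sketch using plain requests (not the Add-on Builder helper API; the paging parameters and response shape are hypothetical placeholders for whatever your API uses):

    import logging
    import time

    import requests

    log = logging.getLogger("asset_input")  # hypothetical logger name

    def fetch_all_pages(base_url, headers, page_size=100, max_retries=3):
        """Page through the API with explicit timeouts, retries and logging."""
        page = 0
        while True:
            # hypothetical paging parameters; substitute your API's scheme
            url = f"{base_url}?$skip={page * page_size}&$top={page_size}"
            for attempt in range(max_retries):
                try:
                    resp = requests.get(url, headers=headers, timeout=30)
                    resp.raise_for_status()
                    break
                except requests.RequestException as err:
                    log.warning("page %d attempt %d failed: %s", page, attempt + 1, err)
                    time.sleep(2 ** attempt)  # back off before retrying
            else:
                log.error("giving up on page %d after %d attempts", page, max_retries)
                return
            items = resp.json().get("value", [])  # hypothetical response shape
            if not items:
                return
            yield from items
            page += 1
            time.sleep(3)  # pacing between requests, per your description

If the failures line up with one slow page or a dropped connection, the culprit is likely the HF environment (proxy, TLS inspection, DNS) rather than the API.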
Hi, maybe the _configtracker index can help. It would have old and new values for all configuration changes including changes made to user roles. BR! Gunnar
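For example, role changes land in authorize.conf, so something along these lines should surface them (sourcetype and field names as seen in recent versions; verify against your own events):

    index=_configtracker sourcetype=splunk_configuration_change data.path=*authorize.conf*
    | table _time data.path data.action data.changes{}.stanza data.changes{}.properties{}.name data.changes{}.properties{}.new_value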
Don't know what I was doing wrong yesterday, must have been the end-of-day eyes. I did figure it out, but thanks for the answer! I accepted it and gave the karma. Have a great day!
If I am understanding your ask correctly, I think settings like this would do it.
Hello, I'm working on updating a dashboard panel to handle numerical values. I want the panel to show red anytime the count is not zero. This always works when the number is positive, BUT occasionally we have a negative number, such as "-3". Is there a way to make negative values red as well? Basically, anything that isn't zero should be red. Thanks for the help!
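If this is a Simple XML table panel, a colorPalette of type expression can test for any non-zero value, negatives included (the hex values here are the standard Splunk red and green; swap in your own):

    <format type="color" field="count">
      <colorPalette type="expression">if(value != 0, "#DC4E41", "#53A051")</colorPalette>
    </format>

For a single value visualization, ranges can't express "not zero" directly, but coloring on an | eval'd abs(count) with rangeValues [0] and rangeColors ["0x53A051","0xDC4E41"] achieves the same effect.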
Hi all, I need to create some Correlation Searches on Splunk audit events, but I didn't find any documentation about the events to search. E.g., I don't know how to identify the creation of a new role or updates to an existing one; I found only action=edit_roles, but from that I can only know the associated user and not the changed role. Can anyone indicate a URL where I can find Splunk audit information? Ciao. Giuseppe
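Not a substitute for documentation, but the audit trail itself can be searched directly; something like this shows who touched roles, even if, as you say, not which role (field availability varies by version):

    index=_audit action=edit_roles
    | table _time user action info

Pairing this with the _configtracker index may get you both halves: the user from _audit and the changed stanza from _configtracker.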
max_match=0. That's what I didn't include; I completely spaced that option. Thanks as always!
I like this answer; unfortunately I am going to have to update the props for this, since it is not being recognized as a valid XML object and therefore doesn't work. Thanks for the assistance, I greatly appreciate your help!
If the data passes through an HF then parsing (not pre-parsing) is done by the HF.  Adding index-time extractions to the Cloud indexers will do nothing so new extractions must be added to the HF. If the data does not pass through an HF then index-time field extraction is done by the indexers.
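As a concrete sketch, an index-time extraction deployed to the HF would look something like this (sourcetype, regex and field name are hypothetical placeholders):

    # props.conf
    [my:sourcetype]
    TRANSFORMS-asset_id = extract_asset_id

    # transforms.conf
    [extract_asset_id]
    REGEX = asset_id=(\w+)
    FORMAT = asset_id::$1
    WRITE_META = true

The same pair of files on the Cloud indexers would be the right place only for data that never touches the HF.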
Hi Splunkers, today I have a question that is not about a "technical how": my doubt is about best practice.

Environment: a Splunk Cloud combo instance (Core + Enterprise Security) with some Heavy Forwarders.

Task: perform some field extractions.

Details: add-ons for parsing are already installed and configured, so we don't have to create new ones; we simply need to enrich/expand existing ones. Those add-ons are installed on both the cloud components and the HFs.

The point is this: since we already have add-ons for parsing, we could simply edit their props.conf and transforms.conf files; of course, since the add-ons are installed on both the cloud components and the HFs, we would have to perform the changes on all of them. For example, editing the add-on only on the cloud components with the GUI Field Extraction implies that the new fields will be extracted there, because they will not be pre-parsed by the HFs. Plus, we know that we should create a copy of those files in the local folder, to avoid editing the default ones, etcetera, etcetera, etcetera.

But, at the same time, for our SOC we created a custom app used as a container to store all customizations performed by/for them, following one of Splunk's best practices. We store reports, alerts, and so on there: by "we store there" I mean that, when we create something and choose an app context, we set our custom SOC one. With this choice, we could simply perform a field extraction with the GUI and assign our custom app as the app context; of course, with this technique, the custom regexes are saved only on the cloud components and not on the HFs.

So, my question is: when we speak about field extraction, if we consider that pre-parsing performed by the HFs is desired but NOT mandatory, what is the best choice? Maintain all field extractions in the add-ons, or split between the OOT ones and the custom ones, using our custom SOC app?
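For what it's worth, if the enrichment is search-time only (EXTRACT/REPORT in props.conf rather than index-time TRANSFORMS), it only needs to exist where searches run, so keeping it in the SOC app on the cloud side would look roughly like this (sourcetype, regex and field name are hypothetical):

    # props.conf inside the custom SOC app
    [my:sourcetype]
    EXTRACT-soc_user = user=(?<soc_user>\S+)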
Usually you configure inputs in an app of your own. Inside this app there is an inputs.conf where you define the needed attributes like sourcetype, source and the index to send events to. Have you already read https://docs.splunk.com/Documentation/Splunk/latest/Data/Getstartedwithgettingdatain ? If you are regularly indexing and adding new data sources, you should take the System Admin and also the Data Admin courses to fully understand how this should be managed with Splunk.
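A minimal example of such an inputs.conf (path, index and sourcetype are placeholders):

    [monitor:///var/log/myapp/app.log]
    index = myapp
    sourcetype = myapp:log
    disabled = false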
I had the same error message against an ADFS server with encryption, and in my case this worked; I don't know if it is the correct approach. I added the encrypted private key to the signAuthnRequest certificate, which is referenced by this authentication.conf parameter:

    [saml]
    clientCert = cert_and_encrypted_private_key.pem

The password of the encrypted private key was configured in the sslPassword parameter of the same stanza. Now this parameter could be set to true:

    signAuthnRequest = true

Then I reloaded authentication to let the sslPassword be hashed. Worked for me.
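Putting the pieces together, the resulting stanza looked roughly like this (the file name is whatever your combined cert-plus-key PEM is called; sslPassword is stored in clear until the reload hashes it):

    [saml]
    clientCert = cert_and_encrypted_private_key.pem
    sslPassword = <password_of_the_encrypted_private_key>
    signAuthnRequest = true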
Hi @usej - I'm a Community Moderator in the Splunk Community. This question was posted 5 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi team, as shown in the screenshot, a user with read-only access is able to access the delete option and also edit the query. Our requirement is that the user should not be able to see the delete option (it should be hidden). Please help me.
Any updates on this feature implementation?
Hello @VatsalJagani, as you said there is no need to create the index on all heavy forwarders, but let me ask something: when I receive logs from the same new log source, how do I differentiate between the different kinds of logs coming from that same source?
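In case it helps frame the question: when several kinds of logs arrive over one input (a shared syslog port, say), the usual approach is to rewrite the sourcetype on the HF based on a pattern in the raw event. Stanza names and patterns below are hypothetical:

    # props.conf
    [source::udp:514]
    TRANSFORMS-set_sourcetype = st_firewall, st_switch

    # transforms.conf
    [st_firewall]
    REGEX = %ASA-
    FORMAT = sourcetype::cisco:asa
    DEST_KEY = MetaData:Sourcetype

    [st_switch]
    REGEX = %LINK-
    FORMAT = sourcetype::cisco:ios
    DEST_KEY = MetaData:Sourcetype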