
All Posts

As I explained before, KV_MODE on the search head is all that's needed to auto-parse well-formatted JSON. See the spec file for KV_MODE here and then for INDEXED_EXTRACTIONS here, noting it explains why you should NOT set both. They are two means to a similar outcome, but INDEXED_EXTRACTIONS actually puts the values into TSIDX files, whereas search-time extraction does not. You should always start with search time and only move fields that absolutely need it to index time. Please read this and consider taking a few of the free Splunk EDU classes to learn more.
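(For reference, a minimal props.conf sketch of the search-time approach — the sourcetype name here is just a placeholder for whatever yours is:)

[my_json_sourcetype]
KV_MODE = json

That stanza goes on the search head, with no INDEXED_EXTRACTIONS set anywhere.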
According to your screenshot, the inputs are "DISABLED". The checkbox follows the usual Splunk inputs convention: checked == disabled. Uncheck those inputs, and you should see data flow. Thanks!
Hey guys, my el basically tells me that we're going to be deep diving on the indexes in our env to extract some usage data and optimize some of the intake. We will mostly be in the Search app, writing queries to pull this info, usually in the _audit index, trying to find which KOs/indexes/searches/etc. are being used, what's not being used, and just overall monitoring. Any advice or tips on this?
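(For anyone picking this up, a minimal sketch of the kind of _audit search this usually starts from — field names are per the standard audit events, so verify against your own data:)

index=_audit action=search info=completed
| stats count AS searches BY user, savedsearch_name
| sort - searches

From there you can pivot to which indexes appear in the search strings, scheduled vs. ad hoc runs, and so on.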
Hello, I have a requirement in a dashboard. My multiselect input should automatically remove ALL (the default value) if I select any other value, and ALL should come back if I deselect the selected value. Please help me get this result.

<input type="multiselect" token="app_name">
  <label>Application Name</label>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>app_name</fieldForLabel>
  <fieldForValue>app_name</fieldForValue>
  <search base="base_search">
    <query>| stats count by app_name</query>
  </search>
  <valuePrefix>app_name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>
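(A commonly shared Simple XML sketch for this kind of requirement: add a change handler inside the input above. It leans on ALL having the value "*" and on 'form.app_name' being a multivalue token; multivalue eval behavior in change handlers can be finicky, so treat this as a starting point and test it in your dashboard:)

<change>
  <eval token="form.app_name">case(
    mvcount('form.app_name')=0, "*",
    mvindex('form.app_name',-1)="*", "*",
    mvindex('form.app_name',0)="*" AND mvcount('form.app_name')>1, mvindex('form.app_name',1,-1),
    true(), 'form.app_name')</eval>
</change>

The case() branches read: nothing selected → restore "*"; ALL just re-selected (it lands last) → keep only "*"; a real value selected while "*" is still first → drop "*"; otherwise leave the selection alone.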
Here you go @Jeewan

Will
@mattymo "please remove it from the config and lets focus on getting your data massaged and auto parsing at search time." ---> How will it auto-parse the data at search time?
Splunk Observability endpoint grouping settings adjustment: http.route is coming through as A/* (a grouped endpoint), but url.path is coming through as A/B/C/D (the full endpoint). How do I fix this? Can anyone help?
If the search without the filter returns all the data, then the filter is removing too much. When running in Python, are the data types being changed? In Splunk, httpcode might be an integer, but Python may see it as a string. Can you validate the data from Python to confirm the value of httpcode is what you're expecting?
I believe you are mixing scenarios here, leading to your confusion. Allow me to try and unwind this a bit. Duplicate events are likely unrelated to your JSON extractions. Let's separate the two items:

1. Indexed Extractions - Let's start with your config. As I mentioned in the previous answers post, you DO NOT need INDEXED_EXTRACTIONS=JSON for this use case, at least not to start. Furthermore, if you only put that setting on the indexers, as shown above, it does nothing. This setting is meant for properly formatted JSON events and must be set on the forwarder, which then sends the data to the indexers already parsed - please read this doc explaining the feature. Please take INDEXED_EXTRACTIONS out of the equation moving forward, OK? It is causing unnecessary confusion here because your original data IS NOT JSON. You do not need this setting to auto-parse JSON at search time, which should always be the first step when onboarding data. I almost ALWAYS try to avoid INDEXED_EXTRACTIONS for reasons that are beyond the scope of getting you sorted. Please remove it from the config and let's focus on getting your data massaged and auto-parsing at search time.

2. Dupe Events - Duplicate events can happen for a few reasons, but none of them are generally related to JSON parsing. Duplicates can be confirmed by comparing the _raw events to see whether they are complete dupes. See this helpful answer on how to validate whether they are truly duplicates, then we can go from there on why you have duplicate events. This should be completely unrelated to your JSON extractions, and is more likely due to your inputs configuration, where your collector is reading the same file twice, or the events truly are duplicated in your source files.

I don't want you to continue twisting in the wind on this data onboarding; it's been ongoing for quite some time. Do you know who your Splunk account team is? Your Sales Engineer should be able to help you get unstuck. Please contact them, as we have various folks who can sit with you and show you the ropes. If you don't know who they are, DM me and I can find them for you. No need to keep banging your head on the desk when we have plenty of trained experts who can help you navigate this learning path.
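(On confirming dupes, a minimal sketch of the kind of check that answer describes — the index and sourcetype are placeholders for yours, and stats by _raw is expensive, so constrain the time range:)

index=your_index sourcetype=your_sourcetype
| stats count BY _raw
| where count > 1

If that returns rows, the events are byte-for-byte duplicates and the inputs side is where to look next.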
@dataisbeautiful do you have any solution for this issue?
From my understanding (and I admit I might be wrong), isn't a CSR a certificate signing request, i.e. a request to a CA to sign a certificate, not an actual certificate? Perhaps this is why you are having difficulties?
Hi @livehybrid, can you please share a screenshot with this URL added to the allowlist: "https://yourwebhookurl.com/v1"?
@splunklearner Please bear in mind that most of us here are volunteers who choose where and when to spend their (usually non-$) time helping people with their questions. Tagging specific people can be counter-productive for two reasons: 1) volunteers like to be able to choose what to spend their time on, and making demands on them when you have no right to do so can put you at the bottom of their priority queue; 2) someone you didn't tag might think you do not value their opinion and therefore be less inclined to help. Something for you to consider in future!
Thanks for the quick help and it did save my day!
Hi, I'm currently encountering the following error message in `splunkd.log` when I enable the custom TA add-on. I have a Python script that successfully tests the signed CSR, private key, and root CA; it can establish a connection and retrieve logs as expected. However, when using the application I created, I am seeing the error message below. I've double-checked the values, and everything seems to be the same. In our testing environment it works, but the only difference I noticed is that the root CA certificate is in .csr format. Should I convert it to .pem, as we did in the testing environment?

-0700 ERROR ExecProcessor - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/TA_case/bin/case.py" HTTPSConnectionPool(host='<HiddenForSensitivityPurpose>', port=443): Max retries exceeded with url: <HiddenForSensitivityPurpose>caseType=Service+Case&fromData=2025-02-06+17%3A23&endDate=2025-02-06+21%3A23 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)')))
@mattymo @gcusello @PickleRick can you all please answer this question - https://community.splunk.com/t5/Getting-Data-In/Duplicate-values-because-of-json-values/m-p/711126#M117476
Thank you very much. I followed your method and resolved the issue.
Thanks so much @livehybrid 
Hi @Raja1, do you now get a different error? What do you see in splunkd.log and mongod.log, or in the CLI output? Thanks
Hmm, that is odd. So you are seeing both Medium and High being created? Can you please double-check that there isn't a search running with the same rule name that could be creating the Medium-severity alerts? In the past, when I have cloned ESCU searches for example, I have accidentally left the original searches enabled and ended up creating notables from them too!
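(If it helps, a quick sketch using the saved-searches REST endpoint to hunt for same-named rules — "Your Rule Name" is a placeholder, and alert.severity will only be populated on searches that set it; run this from the ES search head:)

| rest /servicesNS/-/-/saved/searches
| search title="*Your Rule Name*"
| table title, eai:acl.app, disabled, alert.severity

Anything enabled with the same or a similar title is a candidate source of the stray Medium notables.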