All Posts


Thank you brother! I'm checking it out as we speak  
Thanks! I'm a bit new to the Splunk Community forum, but if I accept this as the solution, will it prevent other users from still adding advice?
Hi @Karthikeya  To achieve this you are probably best off using some JavaScript. Have a look at these two links, as I think they contain working examples for you to use:
https://community.splunk.com/t5/Dashboards-Visualizations/Remove-quot-All-quot-from-Multiselect-Input-in-Dashboard/m-p/301375
https://community.splunk.com/t5/Dashboards-Visualizations/How-to-get-rid-of-default-quot-All-quot-from-Multiselect-filter/m-p/638796/highlight/true
Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
Hi @uagraw01  Are there any mongo/KV Store entries in $SPLUNK_HOME/var/log/splunk/splunkd.log or mongod.log with error/critical/fatal (or maybe even warning) messages? What process did you follow to rebuild the MongoDB folder? Thanks, hopefully we can help get to the bottom of it!
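One quick way to surface such messages is a search like this over _internal (a minimal, untested sketch, assuming splunkd.log and mongod.log are being indexed into _internal as usual):
index=_internal (source=*mongod.log* OR (source=*splunkd.log* KVStore*))
(ERROR OR FATAL OR WARN*)
| sort - _time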
At a high level, the following searches can be starting points for the information you're looking for.
1. Audit index queries:
- Use "index=_audit" to explore usage data; look for sourcetypes like "audittrail" and "searches" (a concrete sketch follows at the end of this post).
2. Knowledge Object (KO) usage:
- Check for saved searches, reports, and dashboards usage.
- Use "index=_audit action=search search_id=*" to find executed searches.
- Check "index=_internal sourcetype=splunkd_conf" for configuration changes.
3. Index usage:
- Analyze "index=_internal sourcetype=splunkd_access" for index access patterns.
- Use "index=_introspection sourcetype=splunk_resource_usage" for resource usage.
4. Search performance:
- Examine "index=_audit action=search" for slow searches.
- Look at "index=_internal sourcetype=scheduler" for scheduled search performance.
5. Data intake:
- Review "index=_internal sourcetype=splunkd" for forwarder and receiver logs.
You could also look at the Alerts for Splunk Admins app on Splunkbase, which has a good bunch of searches baked in (https://splunkbase.splunk.com/app/3796).
Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
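As a concrete starting point for the audit index queries above, here is a minimal, untested sketch (assuming the default audittrail sourcetype) that shows who is running searches and how often:
index=_audit sourcetype=audittrail action=search info=completed
| stats count AS search_count latest(_time) AS last_run BY user
| convert ctime(last_run)
| sort - search_count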
Dear Splunkers!!
Following the migration of our Splunk server from version 8.1.1 to 9.1.1, we have encountered persistent KV Store failures. The service terminates unexpectedly multiple times post-migration.
Issue Summary: As a workaround, I renewed the server.pem certificate and rebuilt the MongoDB folder. This temporarily resolves the issue, and KV Store starts working as expected. However, the corruption reoccurs the following day, requiring the same manual interventions.
Request for Permanent Resolution: I seek a permanent fix to prevent KV Store from repeatedly failing. Kindly provide insights into the root cause and recommend a robust solution to ensure KV Store stability post-migration.
Looking forward to your expert guidance.
Instead of using event handlers to set tokens, I recommend using a base search and subsearches for a more robust solution. Here's an approach you could consider:
1. Create a base search that calculates the counts for today, yesterday, and last week in one go.
2. Use subsearches in your dashboard panels to reference the results from this base search.
Here's an example of how you might structure the base search (I haven't tested this, but hopefully you can apply it to your environment):
| makeresults
| map search="search [your original search here] earliest=-1d | stats count by Site | eval period=\"today\""
| append [| makeresults | map search="search [your original search here] earliest=-2d latest=-1d | stats count by Site | eval period=\"yesterday\""]
| append [| makeresults | map search="search [your original search here] earliest=-8d latest=-7d | stats count by Site | eval period=\"lastweek\""]
| stats latest(count) as count by Site, period
| transpose column_name=period header_field=Site
Note that each append subsearch needs its own makeresults to give map a row to run against.
Then, in your dashboard panels, you can use subsearches to reference this base search and calculate the deltas:
| eval delta_yesterday = today - yesterday
| eval delta_lastweek = today - lastweek
This approach eliminates the need for complex token manipulation and provides a more straightforward way to calculate and display the deltas you need.
@becksyboy  For more comprehensive CIM mapping coverage, you might need to perform manual CIM mapping. The Splunk Add-on Builder can help you map fields from your data events to the fields in any data model, including the CIM data models. Check this: https://community.splunk.com/t5/Splunk-Enterprise/Azure-Firewall-Logs-Issue/m-p/703787
Are you using a search to populate the multiselect? If so, you should adjust this search to exclude the names you do not want to appear in the list; this should stop users from then selecting them. If you have an "All / *" option in your multiselect as a manual entry, then you should also exclude these users in the SPL search that populates your panels, so that when the All / * option is selected these users are still excluded (a quick sketch of both parts follows below). Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
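A minimal, untested sketch of both parts, where your_index, the user field, and the six names are placeholders for your actual values. The search populating the multiselect:
index=your_index
| stats count by user
| search NOT user IN ("name1", "name2", "name3", "name4", "name5", "name6")
And the panel searches, so the exclusion still applies when All / * is selected:
index=your_index $user_token$ NOT user IN ("name1", "name2", "name3", "name4", "name5", "name6")
| stats count by user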
How do I exclude 6 names from my dashboards? They come up in all my multiselects and several panels 
Hi all, I am trying to figure out a way, based on the data available in the table below, to add a column to the Yesterday and Last Week tables with the delta between the values.
The queries in the panels are simple stats counts grouped by Site (BDC or SOC) with the addtotals command specified. To display the values for yesterday and last week I am using time shifts within the query. As an example, this is the "yesterday" timeshift:
[| makeresults
| addinfo
| eval earliest=info_min_time - 86400
| eval latest=info_max_time - 86400
| table earliest latest]
I need to add a column in both the Yesterday and Last Week tables that shows the volume's delta in comparison with Today. I am trying to pass the result of the first query as a token so I can reference it in the other queries and use eval to calculate the delta, but I can't make it work. This is the block I have added to the JSON to pass the result as a token:
"eventHandlers": [
  {
    "type": "action.setToken",
    "options": {
      "tokens": {
        "todayVolume": "$result.Count$"
      }
    }
  }
],
When I try this approach, Splunk complains that the token "$result.Count$" hasn't been set. I was also exploring the idea of using chain searches, but I think dynamic tokens are a cleaner, more efficient solution. I'd appreciate some assistance with figuring this out. Thank you in advance.
Hi, I wrote this to check whether it is working or not.
props.conf:
[source::http:my LogStash]
sourcetype = httpevent
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_override_raw
transforms.conf:
[securelog_set_default_metadata]
INGEST_EVAL = host := json_extract(_raw, "host.name")
[securelog_override_raw]
INGEST_EVAL = message := json_extract(_raw, "message")
Now, which query do I need to run in Search & Reporting? I wrote:
index="" sourcetype=""
| table host, message, _raw
With this I am only seeing the same data I could already get with spath in the search query; I did not get any extracted data, the values are the same, non-extracted. What could be the reason, and how can I check whether my props and transforms are working correctly? Can you provide a query to check, or some information I can refer to?
Hi, We noticed that for the Splunk Add-on for Microsoft Cloud Services, CIM mapping is not enabled for all the sourcetypes: https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Sourcetypes/ In particular, for the mscs:kql sourcetype we are ingesting Azure Network logs via sourcetype="mscs:kql" Type=AZFWNetworkRule. I would have expected this add-on to include Network data model CIM mapping without us having to do it ourselves (which we can if required). Is this the best add-on to use (or is there a better option) if you want more CIM mapping coverage by default, or have you had to do manual CIM mapping when using this TA? Thanks
This .conf24 presentation should have some useful information. GitHub - TheWoodRanger/presentation-conf_24_audittrail_native_telemetry
@livehybrid  Thanks so much, really appreciate it. Any idea why this option is not available on the Splunk Cloud trial version?
- Know who is logging into which Splunk systems
- Know which systems searches are being performed on
- What searches are being performed
- What commands are being used in a search (think SPL keywords such as search, lookup, join, append, mvcount, etc.) - see the sketch below
- What sourcetypes, lookups, eventtypes, etc. are being searched
- What dashboards are being visited
- etc.
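As a rough illustration of the command-usage point (an untested sketch, assuming the default audittrail data in _audit), something like this counts which SPL commands are actually being run:
index=_audit action=search info=completed search=*
| rex field=search max_match=0 "\|\s*(?<command>\w+)"
| stats count by command
| sort - count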
As I explained before, KV_MODE on the search head is all that's needed to auto-parse well-formatted JSON. See the spec file for KV_MODE here and then for INDEXED_EXTRACTIONS here, noting it explains why you should NOT set both. They are two means to a similar outcome, but INDEXED_EXTRACTIONS actually puts the values into TSIDX files, whereas search-time extraction does not. You should always start with search time and only move fields that absolutely need it to index time. Please read this and consider taking a few of the free Splunk EDU classes to learn more.
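For reference, the search-time setup is a single line in props.conf on the search head (a sketch; my_json_sourcetype is a placeholder for your actual sourcetype):
[my_json_sourcetype]
KV_MODE = json
The index-time alternative (INGEST_EXTRACTIONS aside, INDEXED_EXTRACTIONS = json set in props.conf where the data is first parsed) is what writes extracted fields into the TSIDX files, which is why you choose one or the other, never both.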
According to your screenshot, the inputs are "DISABLED". The checkbox follows the usual convention for Splunk inputs: checked means disabled. Uncheck those inputs, and you should see data flow. Thanks!
Hey guys, my lead basically tells me that we're going to be deep-diving on the indexes in our environment to extract some usage data and optimize some of the intake. We will mostly be in the Search app, writing queries to pull this info, usually in the audit index, trying to find which KOs/indexes/searches/etc. are being used, what's not being used, and just overall monitoring. Any advice or tips on this?
Hello, I have a requirement in a dashboard. My multiselect input should automatically remove ALL (the default value) if I select any other value, and ALL should return if I deselect the selected value. Please help me to get this result.
<input type="multiselect" token="app_name">
  <label>Application Name</label>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>app_name</fieldForLabel>
  <fieldForValue>app_name</fieldForValue>
  <search base="base_search">
    <query>|stats count by app_name</query>
  </search>
  <valuePrefix>app_name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>