All Posts

Thanks @P_vandereerden, it worked the way I wanted.
Hi, in my Splunk dashboard there are a few drop-down inputs and a submit button that submits the tokens for the search query, but I would like to have a popup box to confirm or cancel when clicking the Submit button. Is this possible? Can someone please help?
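One common approach is a small JS extension on the dashboard that intercepts the submit click and shows a native confirm() dialog. A sketch, assuming a Simple XML dashboard; the `.splunk-submit-button button` selector is an assumption and may differ between Splunk versions, so verify it in your browser's dev tools:

```javascript
requirejs(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // Attach in the capture phase so this handler runs before the
    // SubmitButton's own click handler submits the tokens.
    document.querySelector('.splunk-submit-button button')
        .addEventListener('click', function(e) {
            if (!window.confirm('Run the search with the selected inputs?')) {
                // User chose Cancel: swallow the click entirely.
                e.stopImmediatePropagation();
                e.preventDefault();
            }
        }, true); // useCapture = true
});
```

If the user clicks OK, the event propagates normally and the tokens are submitted as before.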
We use the map command to query data. Both July and March data can be queried separately and return results. However, selecting a time range of March to July consistently returns only the March data and loses the July results. The impact is significant, and we hope you can help us investigate, or suggest a different way to implement this. My SPL is as follows:

index=edws sourcetype=edwcsv status="是"
| stats earliest(_time) as earliest_time latest(_time) as latest_time
| eval earliest_time=strftime(earliest_time, "%F 00:00:00")
| eval latest_time=strftime(latest_time, "%F 00:00:00")
| eval earliest_time=strptime(earliest_time, "%F %T")
| eval earliest_time=round(earliest_time)
| eval latest_time=strptime(latest_time, "%F %T")
| eval latest_time=round(latest_time)
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| eval searchEarliestTime=if(info_min_time == "0.000", earliest_time, info_min_time)
| eval searchLatestTime=if(info_max_time="+Infinity", relative_time(latest_time,"+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime, searchLatestTime, "1d")
| mvexpand start
| eval end=relative_time(start,"+7d")
| eval alert_date=relative_time(end,"+1d")
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| eval c=strftime(alert_date, "%F")
| fields start a end b c
| map search="search earliest=\"$start$\" latest=\"$end$\" index=edws sourcetype=edwcsv status=\"是\" | bin _time span=1d | stats dc(_time) as \"访问敏感账户次数\" by date day name department number | eval a=$a$ | eval b=$b$ | eval c=$c$ | stats sum(访问敏感账户次数) as count, values(day) as \"查询日期\" by a b c name number department" maxsearches=500000
| where count > 2
Your help was very much appreciated.
@jiaminyun If you find this solution satisfactory, please accept it.
@jiaminyun  Splunk prioritizes evaluating the total data size in the index against the `maxTotalDataSizeMB` parameter. If the total size exceeds the defined limit, Splunk will begin deleting the oldest buckets, regardless of whether they satisfy the retention period defined by `frozenTimePeriodInSecs`. Conversely, if the data size remains within the specified limit, the system will then assess buckets based on the `frozenTimePeriodInSecs` parameter to archive or delete those exceeding the time threshold. To ensure consistent data retention for a specific duration (e.g., 200 days), it is essential to configure `maxTotalDataSizeMB` to accommodate the anticipated volume of data for the desired retention period.
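As an illustration, here is a hypothetical indexes.conf fragment sized for roughly 200 days of retention; the index name and the daily volume figure are assumptions, so substitute your own measured ingest rate:

```
[my_index]
# Freeze (archive/delete) buckets whose newest event is older than 200 days:
# 200 days * 86400 s = 17280000
frozenTimePeriodInSecs = 17280000
# Size cap is checked first, so it must be large enough to hold ~200 days
# of data. Example: ~2 GB/day on disk -> 200 * 2000 MB = 400000 MB.
maxTotalDataSizeMB = 400000
```

If maxTotalDataSizeMB is left too small for the expected volume, the oldest buckets will be frozen on size long before the 200-day mark is reached.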
@jiaminyun The priority between frozenTimePeriodInSecs and maxTotalDataSizeMB can be understood as follows: maxTotalDataSizeMB takes precedence. If the index size exceeds maxTotalDataSizeMB before reaching the time set in frozenTimePeriodInSecs, the data will be rolled to the frozen state based on the size limit. http://docs.splunk.com/Documentation/Splunk/latest/Indexer/Setaretirementandarchivingpolicy
@Cccvvveee0235  Great. 
Hello sir! Thank you! This question is solved. It was Bitdefender antivirus blocking Splunk when I tried to authenticate.
Hello Team, we are seeing some weirdness when sending logs to Splunk Enterprise on-prem. Previously we used the Splunk OTel Java agent v1 and things were fine. Once the migration to Splunk OTel Java agent v2 was done, we started seeing logs being duplicated, with the duplicates arriving under a kubernetes source instead of the source we used to observe. Can you please help with how we stop the kubernetes source? Please let me know if you need any more information. I would really appreciate any insights into this and into stopping the logs from the kubernetes source. Thanks!
Hello, may I ask two questions? 1) We have configured a 200-day archive for an index, but it has not taken effect. Could you please advise on the triggering conditions for the frozenTimePeriodInSecs parameter? 2) Which has higher priority, the index's frozenTimePeriodInSecs parameter or maxTotalDataSizeMB?
You know when you're just on autopilot when you type things out, and you know that what you've typed is wrong but you still type it anyway? Yeah... corrected that now, but still the same issue. Thank you for pointing that out though, PickleRick; that would've caused another issue down the line.
OK. Firstly, this is something you should use the job inspector for, and possibly the job logs. You might also want to just run the subsearch on its own and see what results it yields. Having said that, some remarks:
1. Unless your "index IN (...)" contains some wildcards, the exclusion following it doesn't make sense.
2. "NewProcessName IN (*)" is a very ineffective approach.
3. Your subsearch results will get rendered as a set of OR-ed composite AND conditions. Each of them will have a specific _time value. So while your main search contains earliest/latest, your subsearch will either return an empty condition set or will tell Splunk to search only at specific points in time, at full seconds. If your data has sub-second time resolution, you're likely not to match much.
4. Your subsearch will actually not yield any results at all, since you're doing tstats count over some fields but then searching through the tstats results using a field which is not included in those results. So either you've trimmed your search too much while anonymizing it, or it's simply broken.
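To see point 3 in action, you can run the subsearch on its own and append the format command, which shows exactly the condition string the subsearch would hand to the outer search. A minimal sketch (index and field names are placeholders):

```
| tstats count where index=some_index by host _time span=1s
| format
```

Each result row becomes an AND-ed group such as (host="a" AND _time=1700000000), and the groups are OR-ed together, which is why events with sub-second timestamps may fail to match those exact whole-second _time values.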
Hey, lately I was working on an SPL search and wondered why it isn't working. This is simplified:

index IN(anonymized_index_1, anonymized_index_2, anonymized_index_3, anonymized_index_4) NOT index IN (excluded_index_1)
    earliest=-1h@h latest=@h EventCode=xxxx sourcetype="AnonymizedSourceType" NewProcessName IN (*)
    [| tstats count
        where index IN(anonymized_index_3, anonymized_index_1, anonymized_index_4, anonymized_index_2) NOT index IN (excluded_index_1)
        earliest=-1h@h latest=@h idx_EventCode=xxxx sourcetype="AnonymizedSourceType" idx_NewProcessName IN(*)
        by idx_Field1 _time idx_Field2 host index span=1s
    | search anonym_ref!="n/a" OR (idx_NewProcessName IN (*placeholder_1*, *placeholder_2*) AND (placeholder_field_1=* OR placeholder_field_2=*))]

When I run this SPL, I've noticed inconsistent behavior regarding the earliest and latest values. Sometimes the search respects the defined earliest and latest values, but at other times it completely ignores them and instead uses the time range from the UI time picker. After experimenting, I observed that if I modify the search command to combine the conditions into one single condition instead of having two separate conditions, it seems to work as expected. However, I find this behavior quite strange and inconsistent. I would like to retain the current structure of the search command (with two conditions) but ensure it always respects the defined earliest and latest values. If anyone can identify why this issue occurs or provide suggestions to resolve it while maintaining the current structure, I'd greatly appreciate your input.
This is an example from the Splunk dashboard examples app (Custom Table Row Expansion) which shows lazy search string evaluation. https://splunkbase.splunk.com/app/1603

requirejs([
    '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/chartview',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(_, TableView, ChartView, SearchManager, mvc) {
    var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({
        initialize: function() {
            // initialize will run once, so we will set up a search and a chart to be reused.
            this._searchManager = new SearchManager({
                id: 'details-search-manager',
                preview: false
            });
            this._chartView = new ChartView({
                'managerid': 'details-search-manager',
                'charting.legend.placement': 'none'
            });
        },
        canRender: function(rowData) {
            // Since more than one row expansion renderer can be registered,
            // we let each decide if they can handle that data.
            // Here we will always handle it.
            return true;
        },
        render: function($container, rowData) {
            // rowData contains information about the row that is expanded.
            // We can see the cells, fields, and values.
            // We will find the sourcetype cell to use its value.
            var sourcetypeCell = _(rowData.cells).find(function(cell) {
                return cell.field === 'sourcetype';
            });
            // Update the search with the sourcetype that we are interested in.
            this._searchManager.set({
                search: 'index=_internal sourcetype=' + sourcetypeCell.value + ' | timechart count'
            });
            // $container is the jquery object where we can put our content.
            // In this case we will render our chart and add it to the $container.
            $container.append(this._chartView.render().el);
        }
    });
    var tableElement = mvc.Components.getInstance('expand_with_events');
    tableElement.getVisualization(function(tableView) {
        // Add custom cell renderer; the table will re-render automatically.
        tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
    });
});
Even if everything else is OK, you're using wrong endpoint. It's "collector", not "collection".
Looks good! Just be mindful of the difference between the plus (+) operator and the dot (.) operator. Plus concatenates strings and adds numbers, but dot concatenates both strings and numbers as strings. If you're unsure of the order of operations or of the variable's value in other contexts, you can wrap the variables in the tostring() function.

It's interesting that Splunk also assumed a leading space would work in the SC4S transforms:

[metadata_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = t="$1$0
DEST_KEY = _raw

@PickleRick's suggestion to escape the space with e.g. a backslash just evaluates to a literal backslash followed by a space.
I don't have much experience with ingest actions, but my understanding is that they can indeed be called later in the event's path, on already parsed data. Remember though that they have limited functionality.
I just tried, but unfortunately this is not working. I'm still running into the same issue where the search is not using the JavaScript variable. In the code below I even tried "+splQuery+", but nothing.

var splQuery = "| makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: "false",
    search: ""
});
mysearch.settings.set("search", splQuery);
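For comparison, here is a sketch of a pattern that usually works with the SplunkJS SearchManager. Two things to note in the snippet above: autostart should be a boolean rather than the string "false", and with autostart disabled the job never runs until you start it explicitly, so updating the search string alone produces nothing:

```javascript
var splQuery = "| makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");

var mysearch = new SearchManager({
    id: "mysearch",
    autostart: false,      // boolean, not the string "false"
    search: splQuery       // pass the query at construction time
});

// If the query needs to change later, set it and then start the job:
mysearch.settings.set("search", splQuery);
mysearch.startSearch();
```

This is a sketch under the assumption that the rest of your dashboard extension is loading correctly; check the browser console and the job inspector to confirm the job is actually dispatched.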
In the end, after struggling with this for several days, I thought to myself, this is leading me nowhere. I wanted the fields to follow each other like this: (TIME)(SUBSECOND) (HOST). I had the idea to concentrate on adding the whitespace before HOST, and not after TIME or SUBSECOND. This approach also had its problems, because spaces at the start of the FORMAT string seem to be ignored, but here I managed to get around that: https://community.splunk.com/t5/Getting-Data-In/Force-inclusion-of-space-character-as-a-first-character-in/m-p/709157 This way I could let go of the question of adding whitespaces conditionally. Though I could not solve this particular problem, my overall problem is now solved.