All Posts

Hello Giuseppe, Thanks very much for your suggestion, but the query is giving an error: "Cannot find client in the source field client in the lookup table". Now, we can't add clients to the lookup table because that would complicate things. Can you please tell me other ways to do it, maybe through a join or something? Much appreciated.
Hello @kumva01, Can you explain more about what the source is and how you are trying to ingest the data? What will the data flow be? The props configuration alone will not be able to help you with breaking the events and maintaining the JSON format for further field extractions.   Thanks, Tejas.
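For reference, a minimal props.conf sketch for newline-delimited JSON (one object per line) often looks like the following; the sourcetype name my_json and the timestamp key are placeholders to adapt:

[my_json]
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z

But as noted, whether this applies depends on the source and the ingestion path (UF with indexed extractions vs. HEC, single objects vs. arrays), hence the questions above.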
Hello, Indeed, my concern is about performance for both indexing and search, because I can see from time to time that _indextime gets far from the event timestamp: a gap of 5 to 100 seconds, when the average is 30 ms (200 ms at worst). We are indexing JSON-formatted logs and, yes, using INDEXED_EXTRACTIONS = json (set in the sourcetype), so I guess all fields are indexed automatically. Checking with walklex:

| walklex index="my_index" type=field | search NOT field=" *" | stats list(distinct_values) by field

it shows, for the last 60 minutes: 10272 events and 403 fields, some of them with almost entirely unique values, like:

field: sessionid
list(distinct_values): 193249 220320 204598 201715 214656 183875 195165 196683 221079 204274 215453 186199 181808 198200 178018 192400 184038 176133 205139 205432 186822 174164 196244 185719 179251 197758 203770 190584 178399

"avoiding indexed fields is sound as a general rule of thumb" If I understand well, the best approach would be to avoid indexing fields with a large number of unique values, and to index only fields with a small set of possible values (success/failed, green/yellow/red, ...). Then, in my case, what could be a better configuration to reduce the number of indexed fields and index only the low-cardinality fields, as you mentioned? Again, thank you all for your time and support. Regards
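If it helps, a sketch of the usual low-cardinality setup, under the assumption that you can edit the sourcetype where parsing happens (the stanza name, field name, and regex below are placeholders for your data):

# props.conf -- search-time JSON extraction instead of INDEXED_EXTRACTIONS
[my_json_sourcetype]
KV_MODE = json
TRANSFORMS-index_status = add_status_indexed

# transforms.conf -- selectively index one low-cardinality field
[add_status_indexed]
REGEX = "status"\s*:\s*"(\w+)"
FORMAT = status::$1
WRITE_META = true

You'd also declare [status] with INDEXED = true in fields.conf on the search head so searches use the indexed copy. With KV_MODE = json, high-cardinality fields like sessionid are extracted only at search time and stop inflating the .tsidx files; the change applies to newly indexed data only.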
You are likely correct that the UF does not read the log file twice: it reads it once for the initial batch ingestion and then never again while monitoring. The log file does not update its modification time or size, as it's never closed by the application. I believe the CRC would change when the oldest events are overwritten, as that occurs at the top of the file. But as pointed out by you and others, this is not desirable behaviour. So, assuming none of the checks for change in the log file work: do you have any ideas on how I can make the UF open and read the file, or what mechanisms prevent it? As stated in the initial post, I have already tried a few things; are there any more tricks?
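For what it's worth, the inputs.conf settings I'm aware of that target this situation are sketched below (the monitored path is a placeholder); whether they cover a circular file that is overwritten in place without mtime changes is exactly the open question:

[monitor:///var/log/app/circular.log]
# Open the file on every poll instead of trusting modtime/size checks
alwaysOpenFile = 1
# Hash more of the file head so overwrites at the top change the CRC
initCrcLength = 1024

Note the documentation positions alwaysOpenFile mainly for Windows files that don't update their modification time, so treat this as something to test rather than a known fix.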
Hello @splunkreal, Yes, you'll need to define outputs.conf on indexers1 to forward the logs to indexers2. If you wish to send all the logs from indexers1 to indexers2, you can definitely use the defaultGroup parameter in outputs.conf. However, if you wish to send only a few logs from indexers1 to indexers2, it would be better to use _TCP_ROUTING in inputs.conf for those particular inputs.   Thanks, Tejas. --- If the above solution helps, an upvote is appreciated..!!
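A minimal sketch of both options, with hypothetical indexers2 hosts idx2-a/idx2-b listening on 9997:

# outputs.conf on indexers1 -- send everything to indexers2
[tcpout]
defaultGroup = indexers2
indexAndForward = true

[tcpout:indexers2]
server = idx2-a:9997,idx2-b:9997

# inputs.conf on indexers1 -- alternatively, route only selected inputs
[monitor:///var/log/special.log]
_TCP_ROUTING = indexers2

indexAndForward = true keeps a local copy on indexers1; without it, a forwarding indexer stops indexing locally.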
Hello @splunkreal, If you're using rex to extract the fields at search time, there's no way that Splunk will auto-extract the fields that are part of your json_msg field. However, you can write the regex and have the field extracted at search time using Settings -> Fields -> Field Extractions and define it under the sourcetype, so that every time you run an index-based search, the json_msg field is extracted automatically and you can then use | spath input=json_msg to extract the subsequent fields. Alternatively, if at the source you are able to convert the whole string into JSON format, the nested JSON fields will be extracted automatically. Regards, Tejas. --- If the above solution helps, an upvote is appreciated..!!
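As an illustration, the resulting search-time pipeline would look something like this (the index, sourcetype, and rex pattern are placeholders for whatever your events actually contain):

index=my_index sourcetype=my_sourcetype
| rex field=_raw "json_msg=(?<json_msg>\{.+\})"
| spath input=json_msg

Once the extraction is defined under the sourcetype in Settings -> Fields -> Field Extractions, the rex line becomes unnecessary and | spath input=json_msg alone expands the nested keys.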
Unfortunately, I cannot change the behaviour of how the log is written to the extent required. I have increased the max file size and reduced the number of events generated, to prevent overwriting within the lifespan before a reset and so avoid the scenario you described. Unfortunately, this did not result in the UF monitoring the file. Thanks for the heads-up on a potential issue.
First, the mock data doesn't seem to agree with "update cost daily".  Wouldn't the following make more sense?

bill_date  ID  Cost  _time
6/1/25     1   1.24  2025-06-16 09:42:41.282
6/1/25     1   1.4   2025-06-06 09:00:41.282
5/1/25     1   2.5   2025-05-25 09:42:41.282
5/1/25     1   2.2   2025-05-15 09:00:41.282
5/1/25     2   3.2   2025-05-14 09:42:41.282
5/1/25     2   3.3   2025-05-04 09:00:41.282
3/1/25     1   4.4   2025-03-23 09:42:41.282
3/1/25     1   5     2025-03-18 09:00:41.282
3/1/25     2   6     2025-03-13 09:42:41.282
3/1/25     2   6.3   2025-03-03 08:00:41.282

Secondly, when you say latest event "of the month", I assume "month" can be represented by bill_date.  Is this correct? This is the search you need:

| stats latest(Cost) as Cost by bill_date ID
So I am using this script. What it does is create a "Mark Solved" button in each row of the dashboard drilldown; when that button is clicked, it changes the value of the solved field from 0 to 1. I want to know what's wrong in this script: the button can be seen, but when it is clicked nothing happens, even if I refresh the drilldown. This is the drilldown with the button in the rowKey field. This is the script:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!',
    'jquery'
], function(mvc, SearchManager, TableView, ignored, $) {
    // Define a simple cell renderer with a button
    var ActionButtonRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'rowKey';
        },
        render: function($td, cell) {
            $td.addClass('button-cell');
            var rowKey = cell.value;
            var $btn = $('<button class="btn btn-success">Mark Solved</button>');
            $btn.on('click', function(e) {
                e.preventDefault();
                e.stopPropagation();
                // Rewrite the lookup, flipping solved to 1 for this row
                var searchQuery = `| inputlookup sbc_warning.csv | eval rowKey=tostring(rowKey) | eval solved=if(rowKey="${rowKey}", "1", solved) | outputlookup sbc_warning.csv`;
                var writeSearch = new SearchManager({
                    id: "writeSearch_" + Math.floor(Math.random() * 100000),
                    search: searchQuery,
                    autostart: true
                });
                writeSearch.on('search:done', function() {
                    console.log("Search completed and lookup updated");
                    var panelSearch = mvc.Components.get('panel_search_id');
                    if (panelSearch) {
                        panelSearch.startSearch();
                        console.log("Panel search restarted");
                    }
                });
            });
            $td.append($btn);
        }
    });

    // Apply the renderer to the specified table
    var tableComponent = mvc.Components.get('sbc_alarm_table');
    if (tableComponent) {
        tableComponent.getVisualization(function(tableView) {
            tableView.table.addCellRenderer(new ActionButtonRenderer());
            tableView.table.render();
        });
    }
});
/opt/splunk/bin/splunk btool props list XmlWinEventLog:Security --debug | grep MAX_DAYS_AGO
/opt/splunk/etc/system/local/props.conf    MAX_DAYS_AGO = 7

That should work, right? Present on all indexers, all indexers restarted (Splunk Enterprise 9.4.2). Time to log a support call?
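For reference, the intended configuration would be a sourcetype-scoped stanza like this in props.conf on the parsing tier:

[XmlWinEventLog:Security]
MAX_DAYS_AGO = 7

One caveat, as I understand the setting: MAX_DAYS_AGO governs timestamp validation, so events older than the limit are still indexed, just with an adjusted timestamp; it does not drop them, which may be why old events keep appearing.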
That does not work; once you remove the blacklist, it ingests the old events...
I've done a rolling restart of the cluster and checked. It looks like it "should" work but doesn't. Since then, I tried this approach: put a "blacklist_all_WinEvent" app on the UF during the initial start, just an inputs.conf that has "blacklist1 = ." for all winevent sources. I let the UF do its initial thing, and an hour later I remove that app from the UF and restart the UF. Whilst not optimal, that would do the trick for onboarding existing servers, and automating it is easy enough.   Kind Regards, Andre
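A minimal sketch of such an app's inputs.conf, assuming the standard Windows event log stanza names (repeat for every channel the UF collects):

# blacklist_all_WinEvent/local/inputs.conf
[WinEventLog://Security]
blacklist1 = .

[WinEventLog://Application]
blacklist1 = .

[WinEventLog://System]
blacklist1 = .

Since the blacklist is a regex, "." matches every event, so nothing is forwarded while the app is deployed; removing it an hour later resumes collection from the current checkpoint, as described above.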
Hi @Andre_, 1) After the props.conf update/creation, did you restart splunkd on the indexer? 2) If yes to the above, then please use the btool command to check whether the props.conf got applied or not (you can search for splunk btool options here in Communities).   If any reply helps you in any way, a karma point / upvote would be helpful for the author, thanks.
I fixed it by resetting the proxy setting. I am able to access the web UI.  Thank you very much!!
So Splunk's TZ is set to UTC while the browser and workstation have the correct PT TZ? What SSO/IdP are you using?
Yes, the time zone is wrong. I checked with a user: they're located in the PT time zone, but their default time zone is UTC.
Thanks for that. Exactly what I was looking for.
If they haven't set any time zone, then it's their workstation's time zone. Is this TZ wrong, or what is the issue? I'm afraid you are trying to solve an issue that doesn't exist!
As @gcusello said, don't use join; that's the wrong way to do this. However, you are also using the wrong field: your rex statement extracts a field called clients, but your join uses client (singular). Please use the lookup way to do this, not join.
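A minimal sketch of the lookup approach, assuming a lookup named clients.csv with a client column and a hypothetical output column client_info (swap in your real field names):

index=my_index
| rex field=_raw "client=(?<client>\S+)"
| lookup clients.csv client OUTPUT client_info

The key point is that the field extracted by rex must match the lookup's input field exactly, which is where the clients/client mismatch above would bite regardless of join or lookup.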
I used a proxy to work around the port issue, and I now get the same thing as with the curl command: the web UI shows nothing, and when I inspect it, I see "browser-not-supported"? I tried multiple browsers (Chrome, Edge, Firefox).