All Posts


MAX_DAYS_AGO doesn't cut it. It will make Splunk still index the events, but it will assume that the timestamp parsed from the event was wrong, so it will just substitute another timestamp (whatever that effectively ends up being). @Andre_ What you could do to prevent some data from being indexed is to add whitelists/blacklists in inputs matching certain timestamp values. That's a relatively ugly solution and not something to keep forever, but as a start it might be the way to go. Just be careful, because you do it differently when you're ingesting Windows logs as "classic" events than when they are XML.
You need to be a bit more precise about the requirements, but generally it does indeed look like a case for properly sorting the data and using dedup so that it only "catches" the first result for any given combination of fields.
@Emre You can use Splunk's field extractions (props/transforms) or rex in your SPL to extract the fields at search time. For example:

| rex field=_raw "Module:(?<Module>[^\n]+)"
| rex field=_raw "Microflow:\s*(?<Microflow>[^\n]+)"
| rex field=_raw "latesteror_message:(?<latesteror_message>[^\n]+)"
| rex field=_raw "http status:\s*(?<http_status>\d+)"
| rex field=_raw "Http reasonphrase\s*(?<Http_reasonphrase>[^\n]+)"

But best practice is to structure the data at the source itself.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
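As a quick sanity check outside Splunk, the same regex patterns can be prototyped in Python against the sample Mendix message from this thread before wiring them into props/transforms (the field names simply mirror the rex captures above; this is a sketch, not how Splunk runs them internally):

```python
import re

# Sample raw event mirroring the Mendix message posted in this thread
raw = ("Module:SplunkTest\n"
       "Microflow: ACT_Omnext_Create\n"
       "latesteror_message:Access denied..\n"
       "http status: 401\n"
       "Http reasonphrase Access denied...")

# Same patterns as the rex commands above, one named group each
patterns = {
    "Module": r"Module:(?P<v>[^\n]+)",
    "Microflow": r"Microflow:\s*(?P<v>[^\n]+)",
    "latesteror_message": r"latesteror_message:(?P<v>[^\n]+)",
    "http_status": r"http status:\s*(?P<v>\d+)",
    "Http_reasonphrase": r"Http reasonphrase\s*(?P<v>[^\n]+)",
}

fields = {}
for name, pat in patterns.items():
    m = re.search(pat, raw)
    if m:
        fields[name] = m.group("v")

print(fields)
```

If a pattern fails here, it will fail the same way in rex, so this is a cheap way to iterate on the expressions.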
Not at this time. Splunk can auto-extract values only if the whole _raw message consists of the structured data blob. There is an open idea on ideas.splunk.com - https://ideas.splunk.com/ideas/EID-I-208 It is marked as future prospect but of course voting on this issue might provide some additional push. The alternative would be to cut the remainder of the event so that only the json part is left but this way you're losing some data.
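If you do go the route of cutting the non-JSON remainder, the idea is simply to locate where the JSON blob starts and parse only that part. A minimal Python sketch (the event layout here is made up, since the actual format wasn't shown in the thread):

```python
import json

# Hypothetical event: a syslog-style prefix followed by a JSON blob
raw = 'Jun 16 09:42:41 host1 app[123]: {"user": "alice", "status": "ok"}'

# Find the start of the JSON part and parse only that;
# everything before the first "{" is discarded (the data loss mentioned above)
start = raw.find("{")
payload = json.loads(raw[start:])

print(payload)
```

In Splunk itself the equivalent trimming would typically be done with a SEDCMD or an ingest-time transform, at the cost of losing the prefix.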
Good day everyone, I am new to Splunk and I need some suggestions. We are sending our Mendix logs to Splunk Cloud, but our logs arrive in Splunk as a single event. Is it possible for me to extract the fields from the message part? Example:

Module:SplunkTest
Microflow: ACT_Omnext_Create
latesteror_message:Access denied..
http status: 401
Http reasonphrase Access denied...

Or should this data be structured in Mendix before being sent to Splunk? Thanks for any suggestions.
Well... There are multiple things here.
1. _time vs _indextime - this doesn't have to have anything to do with the performance of the indexing pipeline. There can be multiple reasons for it, from pipelines clogging to drifting time on the sources. It would need more detailed troubleshooting to find the reason.
2. As a rule of thumb, indexed extractions are bad. While sometimes they are the "only way" using built-in Splunk mechanisms (for example, ingesting CSVs with a variable order of columns), it is generally better to have the data pre-processed with an external tool to transform it into a format more suitable for normal indexing.
3. Since you are talking about JSON data with indexed extractions, I suspect you're using the built-in _json sourcetype, which should not be used in production. The way to go would be to define your own sourcetype using KV_MODE=json and not using indexed extractions. You can't selectively skip indexing some fields when using indexed extractions. And single indexed fields are very tricky with structured data. I wouldn't recommend it.
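As an illustration of point 2, pre-processing a CSV whose column order varies can be as simple as reading it with a header-aware parser and emitting a fixed structure (e.g. JSON lines) before it ever reaches Splunk. A minimal Python sketch; the file contents and field names are invented:

```python
import csv
import io
import json

# Two hypothetical CSV batches with the same fields in different column orders
batch1 = "host,status,bytes\nweb01,200,512\n"
batch2 = "bytes,host,status\n1024,web02,404\n"

def to_json_lines(csv_text):
    """Read a header-aware CSV and emit one JSON object per row,
    so downstream indexing no longer depends on column order."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [json.dumps(row, sort_keys=True) for row in reader]

lines = to_json_lines(batch1) + to_json_lines(batch2)
for line in lines:
    print(line)
```

The resulting JSON lines can then be ingested with an ordinary KV_MODE=json sourcetype instead of indexed extractions.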
@sverdhan Try the below with clients:

| tstats count WHERE index=* by index sourcetype
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| lookup appserverdomainmapping.csv clients OUTPUT NewIndex, Domain, Sourcetype
| eval NewIndex=NewIndex.sensitivity
| table clients, sensitivity, Domain, Sourcetype, NewIndex

If you do not need to add clients, and just want to display the lookup fields, you can use appendcols:

| tstats count WHERE index=* by index sourcetype
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| appendcols [| inputlookup appserverdomainmapping.csv | fields Domain, Sourcetype, NewIndex]
| eval NewIndex=NewIndex.sensitivity
| table clients, sensitivity, Domain, Sourcetype, NewIndex

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
@sgarcia Blank h (host) values in license usage logs are due to Splunk's "squashing" process. This occurs to optimize log storage and performance when there are too many unique values to track individually. If possible, analyze your actual event data, e.g.:

index=wineventlog
| eval bytes=len(_raw)
| stats sum(bytes) as bytes by host
| eval GB=round(bytes/1024/1024/1024, 2)
| sort -GB

Also check the license usage reports and split by host for the last 60 days if available, or try metrics.log to identify which hosts have high throughput:

index="_internal" source="*metrics.log" group="per_host_thruput"

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
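The per-host sizing logic in that first search is just "sum the raw event lengths per host, sort descending". Sketched in Python over a handful of mock events (host names and event sizes are invented):

```python
from collections import defaultdict

# Mock (host, raw_event) pairs standing in for index=wineventlog results
events = [
    ("dc01", "x" * 1500),
    ("dc01", "x" * 2500),
    ("web01", "x" * 800),
]

bytes_by_host = defaultdict(int)
for host, raw in events:
    bytes_by_host[host] += len(raw)  # mirrors eval bytes=len(_raw) | stats sum(bytes)

# Sort descending by total bytes, like | sort -GB
for host, total in sorted(bytes_by_host.items(), key=lambda kv: -kv[1]):
    print(host, total)
```

Note that len(_raw) measures the searchable event size, which approximates but does not exactly equal licensed ingest volume.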
Hello @sgarcia, This behavior in the logs is because the squash threshold limit is being hit for license_usage.log for the h, s, st fields. An additional way to measure ingestion volume would be from metrics.log, using either per_host_thruput, per_source_thruput, or per_sourcetype_thruput. With this thruput data, you can look at the series field to see which component has the highest volume. Additionally, the squash_threshold can be configured in limits.conf, but it is NOT advisable to update the limit without consulting Splunk Support, because increasing it from the default value can cause heavy memory issues. Thanks, Tejas.
Hello @vishalduttauk, Can you provide the complete search that you used for migrating the data from one server to the other? Thanks, Tejas.
Hello Giuseppe, Thanks so much for your suggestion, but the query is giving an error: Cannot find client in the source field client in the lookup table. Now, we can't add clients to the lookup table because that would complicate things. Can you please tell me other ways to do it, maybe through a join or something. Much appreciated.
Hello @kumva01, Can you explain more about what the source is and how you are trying to ingest the data? What will the data flow be? The props configuration alone will not be able to help you with breaking the events and maintaining the JSON format for further field extractions. Thanks, Tejas.
Hello, Indeed, my concern is about performance for both indexing and search, because I can see from time to time that the _indextime value gets far from the event timestamp: a gap from 5 to 100 seconds, when the average is 30 ms (200 ms at the worst). Actually, we are indexing JSON-formatted logs and, yes, using INDEXED_EXTRACTIONS = json (set in the sourcetype), so I guess all fields are indexed automatically. Then, checking with walklex:

| walklex index="my_index" type=field
| search NOT field=" *"
| stats list(distinct_values) by field

it shows, for the last 60 minutes: events: 10272, number of fields: 403. Some of the fields have unique values per event, like sessionid, whose distinct values list looks like 193249, 220320, 204598, 201715, 214656, ... (essentially one value per event).

"avoiding indexed fields is sound as a general rule of thumb" - if I understand well, the best approach would be to avoid indexing fields with a large number of unique values, and index only fields with a low number of possible values (success/failed, green/yellow/red, ...). Then, in my case, what would be a better configuration to reduce the number of indexed fields, and to index only the fields with low cardinality, as you mentioned? Again, thank you all for your time/support. Regards
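The "index only low-cardinality fields" heuristic discussed here can be prototyped offline before touching the sourcetype config. A rough Python sketch that counts distinct values per field and flags candidates below an arbitrary threshold (the threshold and the mock data are invented for illustration):

```python
from collections import defaultdict

# Mock extracted events: sessionid is high-cardinality, status is low
events = [{"sessionid": str(100000 + i),
           "status": "success" if i % 2 else "failed"}
          for i in range(1000)]

# Count distinct values observed per field
distinct = defaultdict(set)
for ev in events:
    for field, value in ev.items():
        distinct[field].add(value)

THRESHOLD = 50  # arbitrary cutoff for "low cardinality"
index_worthy = [f for f, vals in distinct.items() if len(vals) <= THRESHOLD]
print(index_worthy)
```

Fields like sessionid would fail the cutoff and stay as search-time extractions, while enum-like fields (success/failed) would pass.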
You are likely correct in that the UF does not read the log file twice: it reads it initially for a batch ingestion and then never again when monitoring. The log file does not update its modification time or size, as it's never closed by the application. I believe the CRC would change when the oldest events are overwritten, as that occurs at the top of the file. But as pointed out by you and others, this is not desirable behaviour. So, assuming none of the checks for change in the log file work, do you have any ideas on how I can make the UF open and read the file, or what mechanisms prevent it? As stated in the initial post, I have already tried a few things; are there any more tricks?
Hello @splunkreal, Yes, you'll need to define outputs.conf on indexers1 to forward the logs to indexers2. If you wish to send all the logs from indexers1 to indexers2, you can definitely use the defaultGroup parameter in outputs.conf. However, if you wish to send only a few logs from indexers1 to indexers2, it would be better to use _TCP_ROUTING in inputs.conf for those particular inputs. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated..!!
Hello @splunkreal, If you're using rex to extract the fields at search time, there's no way that Splunk will auto-extract the fields that are part of your json_msg field. However, you can write the regex and have the field extracted at search time using a field extraction from Settings -> Fields -> Field extractions, defined under the sourcetype. That way, every time you run an index-based search, the json_msg field will be extracted automatically, and then you can use | spath input=json_msg to extract the subsequent fields. Alternatively, if you are able to convert the whole string into JSON format at the source, the nested JSON fields will be extracted automatically. Regards, Tejas. --- If the above solution helps, an upvote is appreciated..!!
Unfortunately, I cannot change the behaviour of how the log is written to the extent required. I have increased the max file size and reduced the number of events generated to prevent overwriting within the lifespan before a reset, to avoid the scenario you described. Unfortunately, this did not result in the UF monitoring the file. Thanks for the heads-up about a potential issue.
First, the mock data doesn't seem to agree with "update cost daily". Wouldn't the following make more sense?

bill_date  ID  Cost  _time
6/1/25     1   1.24  2025-06-16 09:42:41.282
6/1/25     1   1.4   2025-06-06 09:00:41.282
5/1/25     1   2.5   2025-05-25 09:42:41.282
5/1/25     1   2.2   2025-05-15 09:00:41.282
5/1/25     2   3.2   2025-05-14 09:42:41.282
5/1/25     2   3.3   2025-05-04 09:00:41.282
3/1/25     1   4.4   2025-03-23 09:42:41.282
3/1/25     1   5     2025-03-18 09:00:41.282
3/1/25     2   6     2025-03-13 09:42:41.282
3/1/25     2   6.3   2025-03-03 08:00:41.282

Secondly, when you say the latest event "of the month", I assume "month" can be represented by bill_date. Is this correct? This is the search you need:

| stats latest(Cost) as Cost by bill_date ID
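The stats latest(Cost) by bill_date ID logic amounts to keeping, per (bill_date, ID) pair, the Cost from the row with the greatest _time. A Python sketch over the first six mock rows shows the same grouping (string timestamps in this format compare correctly lexicographically):

```python
# Rows: (bill_date, ID, Cost, _time), matching the mock data above
rows = [
    ("6/1/25", 1, 1.24, "2025-06-16 09:42:41.282"),
    ("6/1/25", 1, 1.40, "2025-06-06 09:00:41.282"),
    ("5/1/25", 1, 2.50, "2025-05-25 09:42:41.282"),
    ("5/1/25", 1, 2.20, "2025-05-15 09:00:41.282"),
    ("5/1/25", 2, 3.20, "2025-05-14 09:42:41.282"),
    ("5/1/25", 2, 3.30, "2025-05-04 09:00:41.282"),
]

latest = {}
for bill_date, id_, cost, ts in rows:
    key = (bill_date, id_)
    # Keep the cost from the row with the newest timestamp per key
    if key not in latest or ts > latest[key][0]:
        latest[key] = (ts, cost)

result = {k: cost for k, (ts, cost) in latest.items()}
print(result)
```

Each group keeps only its newest row's Cost, which is exactly what latest() does when events are time-ordered.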
So I am using this script. What it does is create a Solved button in each row of a dashboard drilldown; when that button is clicked, it changes the value of the solved field from 0 to 1. I want to know what's wrong with this script: the button can be seen, but when it is clicked nothing happens, even if I refresh the drilldown. This is the drilldown with the button in the rowKey field. This is the script:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!',
    'jquery'
], function(mvc, SearchManager, TableView, ignored, $) {
    // Define a simple cell renderer with a button
    var ActionButtonRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'rowKey';
        },
        render: function($td, cell) {
            $td.addClass('button-cell');
            var rowKey = cell.value;
            var $btn = $('<button class="btn btn-success">Mark Solved</button>');
            $btn.on('click', function(e) {
                e.preventDefault();
                e.stopPropagation();
                var searchQuery = `| inputlookup sbc_warning.csv | eval rowKey=tostring(rowKey) | eval solved=if(rowKey="${rowKey}", "1", solved) | outputlookup sbc_warning.csv`;
                var writeSearch = new SearchManager({
                    id: "writeSearch_" + Math.floor(Math.random() * 100000),
                    search: searchQuery,
                    autostart: true
                });
                writeSearch.on('search:done', function() {
                    console.log("Search completed and lookup updated");
                    var panelSearch = mvc.Components.get('panel_search_id');
                    if (panelSearch) {
                        panelSearch.startSearch();
                        console.log("Panel search restarted");
                    }
                });
            });
            $td.append($btn);
        }
    });

    // Apply the renderer to the specified table
    var tableComponent = mvc.Components.get('sbc_alarm_table');
    if (tableComponent) {
        tableComponent.getVisualization(function(tableView) {
            tableView.table.addCellRenderer(new ActionButtonRenderer());
            tableView.table.render();
        });
    }
});
/opt/splunk/bin/splunk btool props list XmlWinEventLog:Security --debug | grep MAX_DAYS_AGO
/opt/splunk/etc/system/local/props.conf MAX_DAYS_AGO = 7

That should work, right? Present on all indexers. All indexers restarted. (Splunk Enterprise 9.4.2) Time to log a support call?