All Posts

Thank you so much for your response. However, I did it this way because I wanted to bypass ingesting the logs into a Splunk index and just collect them as a lookup that anyone can use later on. Also, it was working previously until the Splunk upgrade, after which I had to upgrade the add-on. So I do not understand why it was working before and then stopped working.
Thanks for your input! Your explanations were clear, but they do not explain how/why my index did not roll its buckets after reaching the maxTotalDataSizeMB of 5GB and instead grew to 35GB.
Ok, but the indexes are all set with a maxTotalDataSizeMB of 5GB (a default written in my indexes.conf), which, from what I understood, should have stopped each index, individually, from exceeding this size and forced the older warm buckets to cold to avoid saturation.

The doc: https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Indexesconf

maxTotalDataSizeMB = <nonnegative integer>
* The maximum size of an index, in megabytes.
* If an index grows larger than the maximum size, splunkd freezes the oldest data in the index.
* This setting applies only to hot, warm, and cold buckets. It does not apply to thawed buckets.
...

However, the saturation did happen with one of them, and that is the issue I don't understand. My disk is 40GB, and this specific index grew to 35GB, so it hit the minimum free disk space and caused my indexer to fail. The rolling criteria were met, so why didn't it roll the buckets?
OK. See my response there - https://community.splunk.com/t5/Deployment-Architecture/How-do-I-enforce-disk-usage-on-volumes-by-index/m-p/703959/highlight/true#M28814

Additionally, because I'm not sure if this has been said here or not - just because you define something as a volume doesn't mean that everything "physically located" in that directory is treated by Splunk as part of that volume. So if you define a volume like in your case:

[volume:MyVolume]
path = $SPLUNK_DB

you must explicitly reference that volume when defining index parameters. Otherwise the path will not be considered part of the volume. In other words, if your index has

coldPath = volume:MyVolume/myindexsaturated/colddb

this directory will be managed with the normal per-index constraints as well as the global volume-based constraints. But if you define it as

coldPath = $SPLUNK_DB/myindexsaturated/colddb

even though it is in exactly the same place on the disk, it is not considered part of that volume.
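For illustration, a minimal indexes.conf sketch that combines a per-index cap with a volume-based cap (the index name, volume name, and sizes here are hypothetical, not recommendations):

[volume:MyVolume]
path = $SPLUNK_DB
# total cap for everything stored under this volume
maxVolumeDataSizeMB = 30000

[myindexsaturated]
# hot/warm and cold paths must reference the volume to count against it
homePath = volume:MyVolume/myindexsaturated/db
coldPath = volume:MyVolume/myindexsaturated/colddb
# thawedPath must not use a volume reference
thawedPath = $SPLUNK_DB/myindexsaturated/thaweddb
# per-index cap, enforced independently of the volume cap
maxTotalDataSizeMB = 5120

With both in place, Splunk freezes the oldest buckets when either the 5120MB per-index limit or the 30000MB volume limit is exceeded.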
There are some limits for the transaction command; you can find them under "Memory control options" in transaction - Splunk Documentation. More details on these limits can be found in the [transactions] stanza in limits.conf - Splunk Documentation.
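For reference, a minimal sketch of the relevant limits.conf stanza on the search head (the values shown are only illustrative):

[transactions]
# maximum number of open transactions held in memory before eviction starts
maxopentxn = 5000
# maximum number of events stored across all open transactions before eviction starts
maxopenevents = 100000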
There is nothing technically wrong with the current setting.  Warm buckets did not roll to cold because none of the criteria for rolling buckets were met.  Reaching the minimum disk space is not a criterion.  Buckets roll either because the index is too full, the bucket(s) are too old, or the maximum number of warm buckets has been reached.
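As a sketch, these are the per-index settings in indexes.conf that typically trigger warm-to-cold rolling (the stanza name and values here are illustrative):

[myindex]
# roll the oldest warm bucket to cold once this many warm buckets exist
maxWarmDBCount = 300
# roll warm buckets to cold once the hot/warm (homePath) portion exceeds this size
homePath.maxDataSizeMB = 2048

Note that maxTotalDataSizeMB governs freezing (removal) of the oldest data, not the warm-to-cold transition.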
We are using Splunk Enterprise Version: 9.3.1 and we need it for a Classic Dashboard. What I managed to put together is this:

<html>
<style type="text/css">
table tr:nth-child(odd) td { color: red; }
table tr:nth-child(even) td { color: green; }
</style>
</html>

What I actually need is to select rows containing INFO / ERROR / WARNING and color them RED, BLUE, YELLOW.
Ok thanks, I get this part, I'll try to rework the indexes.conf. But what I still don't get, and would really like to know (it's quite important for me to understand what was wrong before changing anything), is why it didn't work in the first place. From what I read in the doc, it should have worked with a simple conf like this, no? Furthermore, using a volume and maxVolumeDataSizeMB will help me control the global size of all indexes on my volume, right? But I need each index to possibly have its own maxTotalDataSizeMB and abide by it. If that's not possible or limited (for whatever reason), feel free to tell me. Thanks again!
Assuming your csv is called numbers.csv and the field is called number, try something like this

index=* [| inputlookup numbers.csv | rename number as search | table search]
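For context, the subsearch expands into an implicit OR of the lookup values, so with the sample file the generated search is roughly:

index=* ( 11111111 ) OR ( 22222222 ) OR ( 33333333 )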
We are setting colours of charts from our company standards but this seems to have broken since Friday; we think it may be browser or HTML updates rather than Splunk. Example code we use is

/* CHART COLOURS FOR LEGEND */
.highcharts-legend .highcharts-series-0 .highcharts-point { fill: #28a197; }
.highcharts-legend .highcharts-series-1 .highcharts-point { fill: #f47738; }
.highcharts-legend .highcharts-series-2 .highcharts-point { fill: #6f72af; }

/* BAR CHART FILL AREA */
.highcharts-series-0 .highcharts-tracker-area { fill: #28a197; stroke: #28a197; }
.highcharts-series-1 .highcharts-tracker-area { fill: #f47738; stroke: #f47738; }
.highcharts-series-2 .highcharts-tracker-area { fill: #6f72af; stroke: #6f72af; }

/* PIE CHART COLOURS */
.highcharts-color-0 { fill: #28a197; }
.highcharts-color-1 { fill: #f47738; }
.highcharts-color-2 { fill: #6f72af; }

Bar charts broke first, and we found that if we replaced .highcharts-tracker-area with .highcharts-point it fixed the bars, but then pie charts were rendered in only one colour.
Whether it takes long to search depends on your data. If these are really long and fairly unique terms, they can be (relatively) quickly searchable, provided that you're looking strictly for those terms and not some wildcarded variations (especially with a wildcard that is not at the end of the search term).
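As a rough illustration with the sample numbers (assuming each number is stored as a single indexed term and not split by segmentation), an exact-term search can hit the index's lexicon directly, while a wildcarded variation cannot:

index=* TERM(11111111) OR TERM(22222222)

index=* *1111*

The first form only inspects events containing those exact terms; the second forces Splunk to scan far more events and is typically much slower.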
It's not about a field but more about the general layout and variability of data in your DB. Splunk works differently - once you ingest an event, it's immutable, whereas the contents of a particular row in a DB can change. So regardless of how you decide that one row of your results has already been ingested, it won't be ingested again even if some "secondary" fields change their values. I don't know your data and I don't know what it represents. If you reconfigure your DB data onboarding process to ingest both states of your DB record (or whatever result set you're getting), you'll have two separate, partly duplicated events in Splunk and will have to handle that somehow at search time.
Okay. Could you check/verify whether you use the Distributed Monitoring Console and whether the affected HFs are configured as indexers under Settings --> Monitoring Console --> Settings --> General Setup? That could be the reason why the heavy forwarders are configured as distributed search peers: to monitor them in the DMC. So if the license manager is on the same instance as the DMC, check the config files for the affected HFs and possibly remove them.
Can you please share the full steps and the path you updated to fix this issue?
I have heavy forwarders without master_uri and manager_uri. They are luckily working okay apart from the error. In etc/licenses there is only the download-trial folder. No forwarder.license.
Hi @Crotyo ,
you should put the csv file in a lookup (called e.g. "my_lookup.csv") containing at least one field (e.g. "my_field"), and then run a search like the following:

index=* [ | inputlookup my_lookup.csv | rename my_field AS query | fields query ]
| ...

In this way you perform the search in full text search mode on all the events.
Ciao.
Giuseppe
I have a csv file like this that contains more than 100 numbers:

11111111
22222222
33333333

I want to search for events that contain these numbers. I can use index=* "11111111" OR "22222222" but it takes way too long. Is there a faster way? These numbers do not have a separate field, nor am I searching in any particular field; I'm just searching for any event log that contains these numbers. Can anyone help? Thanks.
Okay, just to confirm: master_uri and manager_uri are not set on the HF, right? Could you check which files are located under etc/licenses?
Hi Uma
You just have to create a metric per token and use a query like this:

SELECT toInt(expirationDateTime - eventTimestamp) AS "Seconds"

which will give you the difference in seconds between the dates; you can then convert the seconds to minutes/hours/days if you would rather use those. This gives you a metric telling you how many seconds/minutes/hours/days remain until expiry, and you can then alert on it.
Ciao
Hello @Mandar.Kadam ,
Can you share the solution you got from support?
Regards,
Amit Singh Bisht