All Posts


Hi everyone, hope you are all doing well. I am trying to deploy the CIM in a search head cluster environment, and I have some questions: 1. Under /default I found two files (inputs.conf & indexes.conf) that seem to me to be related to an indexer cluster, not a search head cluster. Am I right? 2. What does "the cim_modactions index definition is used with the common action model alerts and auditing" actually mean? I don't understand it. Splunk Common Information Model (CIM)
I do feel a bit stupid now... my cron was wrong. The method was perfectly sane. I did struggle to find any actual documentation saying that this was a valid way of doing it, so I hope this question will help future searchers determine that. Thanks for helping my grey matter along.
Hi @Crotyo, could you share your search? Ciao. Giuseppe
OK. Did you verify what Splunk actually sees?

| rest /data/indexes/myindex

Some of this info you can also see in Settings -> Indexes.
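If you want just the size-related settings, a variant along these lines may help (a sketch; the field names come from the indexes REST endpoint and myindex is a placeholder):

| rest /services/data/indexes/myindex
| fields title currentDBSizeMB maxTotalDataSizeMB frozenTimePeriodInSecs homePath coldPath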
I am getting ready to attempt the Rapid7 Nexpose add-on. Did it end up working for you? I am wondering if there is a better approach, since the app only has two stars on Splunkbase and is not a Splunk-supported app.
I tried that and the search returned empty. I checked the inputlookup command and it did list all the numbers.
I did try that and the search result returned empty.
And you checked your effective settings with btool?
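For reference, a typical invocation looks like this (assuming the index is the myindexsaturated mentioned elsewhere in this thread; --debug shows which .conf file each setting comes from):

$SPLUNK_HOME/bin/splunk btool indexes list myindexsaturated --debug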
Thank you so much for your response. However, I did it this way because I wanted to bypass ingesting the logs into a Splunk index and just collect them as a lookup which anyone can use later on. Also, it was working previously, until a Splunk upgrade meant I had to upgrade the add-on. So I do not understand why it was working before and then stopped working.
Thanks for your input! Your explanations were clear, but they do not explain how/why my index did not roll the buckets after reaching the maxTotalDataSizeMB of 5 GB and instead grew to 35 GB.
OK, but the indexes are all set with a maxTotalDataSizeMB of 5 GB (a default written in my indexes.conf), which, from what I understood, should have stopped each index, individually, from exceeding this size and forced the older warm buckets to cold to avoid saturation.

The doc: https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Indexesconf

maxTotalDataSizeMB = <nonnegative integer>
* The maximum size of an index, in megabytes.
* If an index grows larger than the maximum size, splunkd freezes the oldest data in the index.
* This setting applies only to hot, warm, and cold buckets. It does not apply to thawed buckets.
...

However, the saturation did happen with one of them, and that is the issue I don't understand. My disk is 40 GB; this specific index grew to 35 GB, hit the minimum free disk space, and took down my indexer. The rolling criteria were met, so why didn't it roll the buckets?
OK. See my response there - https://community.splunk.com/t5/Deployment-Architecture/How-do-I-enforce-disk-usage-on-volumes-by-index/m-p/703959/highlight/true#M28814

Additionally, because I'm not sure if this has been said here or not - just because you define something as a volume doesn't mean that everything "physically located" in that directory is treated by Splunk as part of that volume. So if you define a volume like in your case:

[volume:MyVolume]
path = $SPLUNK_DB

you must explicitly use that volume when defining index parameters. Otherwise the index will not be considered part of this volume. In other words, if your index has

coldPath = volume:MyVolume/myindexsaturated/colddb

this directory will be managed by the normal per-index constraints as well as the global volume-based constraints. But if you define it as

coldPath = $SPLUNK_DB/myindexsaturated/colddb

even though it is in exactly the same place on disk, it is not considered part of that volume.
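To make that concrete, a minimal indexes.conf sketch contrasting the two (the size values are illustrative, not recommendations):

[volume:MyVolume]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 30000

[myindexsaturated]
# counted against both the volume cap and the per-index cap
homePath = volume:MyVolume/myindexsaturated/db
coldPath = volume:MyVolume/myindexsaturated/colddb
# thawedPath cannot use a volume reference
thawedPath = $SPLUNK_DB/myindexsaturated/thaweddb
maxTotalDataSizeMB = 5120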
There exist some limits for the transaction command; you can find them under "Memory control options" in transaction - Splunk Documentation. More details on these limits can be found in the transactions stanza in limits.conf - Splunk Documentation.
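For reference, those memory-control settings live in the [transactions] stanza of limits.conf, along these lines (the values shown are the documented defaults at the time of writing; check the spec for your version):

[transactions]
# maximum number of transactions kept open at once
maxopentxn = 5000
# maximum number of events held across all open transactions
maxopenevents = 100000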
There is nothing technically wrong with the current setting. Warm buckets did not roll to cold because none of the criteria for rolling buckets were met. Reaching the minimum disk space is not a criterion. Buckets roll either because the index is too full, the bucket(s) are too old, or the maximum number of warm buckets has been reached.
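Those criteria map onto indexes.conf settings roughly like this (a sketch with illustrative values; the index name is the one from this thread):

[myindexsaturated]
# index too full: roll warm to cold when hot+warm exceed this size
homePath.maxDataSizeMB = 2048
# too many warm buckets: roll the oldest warm bucket to cold
maxWarmDBCount = 300
# buckets too old: freeze buckets older than this many seconds
frozenTimePeriodInSecs = 188697600
# whole-index cap: freeze the oldest data beyond this size
maxTotalDataSizeMB = 5120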
We are using Splunk Enterprise version 9.3.1 and we need it for a Classic Dashboard. What I managed to put together is this:

<html>
  <style type="text/css">
    table tr:nth-child(odd) td { color: red; }
    table tr:nth-child(even) td { color: green; }
  </style>
</html>

What I actually need is to select rows containing INFO / ERROR / WARNING and color them red, blue, and yellow.
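If the level is available as its own field, Simple XML's built-in cell colouring may get you partway there without CSS (a sketch assuming a hypothetical field called log_level; adjust the hex values to your standard). Note it colours only that cell, not the whole row, which would still need custom CSS or JS:

<table>
  <search>
    <query>index=main | table _time log_level message</query>
  </search>
  <format type="color" field="log_level">
    <colorPalette type="map">{"ERROR":#DC4E41,"WARNING":#F8BE34,"INFO":#006D9C}</colorPalette>
  </format>
</table>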
OK thanks, I get this part; I'll try to rework the indexes.conf. But what I still don't get, and I really would like to know (it's quite important for me to understand what was wrong before changing anything), is why it didn't work in the first place. From what I read in the doc, it should have worked with a simple conf like this, no? Furthermore, using a volume and maxVolumeDataSizeMB will help me monitor the global size of all indexes on my volume, right? But I need each index to possibly have a specific maxTotalDataSizeMB and abide by it. If that's not possible or is limited (for whatever reason), feel free to tell me. Thanks again!
Assuming your csv is called numbers.csv and the field is called number, try something like this:

index=* [| inputlookup numbers.csv | rename number as search | table search]
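To verify what that subsearch expands to, you can also run it on its own and append format (same assumed lookup and field names):

| inputlookup numbers.csv | rename number as search | table search | format

The result is the literal ( ... ) OR ( ... ) string that gets substituted into the outer search, which makes it easy to spot quoting or field-name problems.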
We are setting the colours of charts from our company standards, but this seems to have broken since Friday; we think it may be browser or HTML updates rather than Splunk. Example code we use is:

/* CHART COLOURS FOR LEGEND */
.highcharts-legend .highcharts-series-0 .highcharts-point { fill: #28a197; }
.highcharts-legend .highcharts-series-1 .highcharts-point { fill: #f47738; }
.highcharts-legend .highcharts-series-2 .highcharts-point { fill: #6f72af; }

/* BAR CHART FILL AREA */
.highcharts-series-0 .highcharts-tracker-area { fill: #28a197; stroke: #28a197; }
.highcharts-series-1 .highcharts-tracker-area { fill: #f47738; stroke: #f47738; }
.highcharts-series-2 .highcharts-tracker-area { fill: #6f72af; stroke: #6f72af; }

/* PIE CHART COLOURS */
.highcharts-color-0 { fill: #28a197; }
.highcharts-color-1 { fill: #f47738; }
.highcharts-color-2 { fill: #6f72af; }

Bar charts broke first, and we found that if we replaced .highcharts-tracker-area with .highcharts-point it fixed the bars, but then pie charts were only one colour.
Whether it takes long to search depends on your data. If these are really long and fairly unique terms, they can be (relatively) quickly searchable, provided that you're looking strictly for those terms and not some wildcarded variations (especially with the wildcard not at the end of the search term).
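As a rough illustration (hypothetical index and value), an exact-term search can use the index directly, while a leading wildcard cannot:

index=main TERM(4f2a9c81d7e3)
index=main *2a9c81d7e*

The first looks the term up in the index; the second forces Splunk to scan and pattern-match far more events.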
It's not about a field, but more about the general layout and variability of data in your DB. Splunk works differently - once you ingest an event, it's immutable, whereas the contents of a particular row in a DB can change. So regardless of how you decide that one row of your results has already been ingested, it won't be ingested again even if some "secondary" fields change their values. I don't know your data, and I don't know what it represents. If you reconfigure your DB data onboarding process to ingest both states of your DB record (or whatever result set you're getting), you'll have two separate, partly duplicated events in Splunk and will have to handle that somehow at search time.
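One common search-time approach is to keep only the latest state per record, something like this (a sketch assuming a hypothetical key field record_id and source type):

index=main sourcetype=db_input
| dedup record_id sortby -_time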