I need to color a dashboard table with 3 types of logs in red, blue, and yellow according to ERROR, INFO, and WARNING. So far I have managed to integrate the CSS code below into the panel (it colors all rows blue):

<html>
<style type="text/css">
.table-colored .table-row, td {
    color: blue;
}
</style>
</html>

Does anyone know how to add conditions based on CELL values, e.g. if cell = INFO -> blue, if cell = ERROR -> red, if cell = WARNING -> yellow?
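Not an answer from the thread, but one possible sketch: in classic Simple XML dashboards, cell coloring by value can often be done without custom CSS using the built-in table format options. Note this colors the cell background rather than the font, and the field name (logLevel) and search here are assumptions for illustration:

```xml
<table>
  <search>
    <query>index=my_index | table _time logLevel message</query>
  </search>
  <!-- Map each logLevel value to a background color -->
  <format type="color" field="logLevel">
    <colorPalette type="map">{"INFO":#4F34EB,"ERROR":#D93F3C,"WARNING":#F8BE34}</colorPalette>
  </format>
</table>
```

Coloring the font itself (as asked) generally needs a JavaScript table extension instead, since the color map only affects cell backgrounds.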
Thanks again
Thank you guys. This forum is really great.
Hello Splunkers, I hope you are all doing well. I have tried to deploy the Splunk Add-on for Microsoft Windows in my environment [Search Head Cluster + Deployer] [3 Indexer Peers + Indexer Cluster Master] [Deployment Server + Universal Forwarder]. I used the deployment server to push inputs.conf to the designated universal forwarder on the domain controller server and enabled the needed inputs. Then I removed wmi.conf and inputs.conf from the Windows add-on, copied the rest to the local folder, and used the deployer to push the modified Windows add-on to the search heads. As per the screenshot below from the official doc, the indexer is conditional: Why should I push the add-on to the indexers even if there are index-time field extractions? As far as I know, the search head cluster will replicate the knowledge bundle to the indexers, so all the KOs will be replicated and there is no need to push them. Am I correct? Splunk Add-on for Microsoft Windows  Thanks in advance!!
Please remove the parameter  master_uri = self  and try again. If you get the same error, please execute  splunk btool server list license --debug  and share the output.
Hi @richgalloway , thanks for your input. Yes, I only gave the configuration for one index because I mainly rely on the default conf written above for all my indexes on the disk; also, this specific index was the only one saturated, so it is probably the issue here? (Please correct me if I'm wrong in this statement.) For the volumes, I have one in my conf, but I'm not sure how it works and how it's used (I didn't write this conf file myself). I'll try to look into this subject.

[volume:MyVolume]
path = $SPLUNK_DB

Thanks!
Hi! Thank you for your response. I made the change below to my query, extracting the "ERROR" key using regex, and it works properly:

index="idx_xxxx"
| rex field=_raw "\"ERROR\":\"(?<ERROR>[^\"]+)\""
....
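For reference, the same capture can be checked outside Splunk. A minimal Python sketch of the pattern used in the rex above (the sample raw event is invented for illustration):

```python
import re

# Same pattern as the rex command: capture the value of a quoted "ERROR" key.
pattern = r'"ERROR":"(?P<ERROR>[^"]+)"'

raw = '{"level":"high","ERROR":"connection timed out","host":"web01"}'  # invented sample
match = re.search(pattern, raw)
if match:
    print(match.group("ERROR"))  # -> connection timed out
```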
You've shown the configuration for a single index, but no doubt there are other indexes on the same disk.  Those other indexes also consume disk space and help lead to a minFreeSpace situation. To better manage that, I recommend using volumes.  Create a volume (in indexes.conf) that is about the size of the disk (or the amount you want to use) and make the indexes part of that volume (using volume:foo references).  That will ensure the indexer considers the sizes of all indexes when deciding when to roll warm buckets.
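A minimal sketch of what that could look like in indexes.conf — the volume name and size cap here are assumptions, not taken from the original configuration (maxVolumeDataSizeMB is in MB, so 450000 is roughly 450 GB):

```ini
[volume:primary]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 450000

[_metrics]
# homePath/coldPath reference the volume so its cap covers all member indexes.
homePath = volume:primary/_metrics/db
coldPath = volume:primary/_metrics/colddb
# thawedPath cannot use a volume reference; it stays an absolute path.
thawedPath = $SPLUNK_DB/_metrics/thaweddb
```

With every index's homePath/coldPath pointed at the same volume, the indexer rolls the oldest warm buckets across all member indexes once the volume cap is reached.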
[UPDATE] Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear I'll try to explain my problem in more detail.

I'm using Splunk 9.2.1, and I recently observed that my indexer was not indexing the logs received. The indexer is in a failure state because my partition $SPLUNK_DB reached the minFreeSpace allowed in server.conf. After further analysis, it seems that one of the indexes on the partition, _metrics, is saturated with warm buckets (db_*) and taking all the available space. I have, however, configured all my indexes with indexes.conf ($SPLUNK_HOME/etc/system/default/indexes.conf):

# index specific defaults
maxTotalDataSizeMB = 5000
maxDataSize = 1000
maxMemMB = 5
maxGlobalRawDataSizeMB = 0
maxGlobalDataSizeMB = 0
rotatePeriodInSecs = 30
maxHotIdleSecs = 432000
maxHotSpanSecs = 7776000
maxHotBuckets = auto
maxWarmDBCount = 300
frozenTimePeriodInSecs = 188697600
# there's more, but I might not be able to disclose it or it might not be relevant

[_metrics]
coldPath = $SPLUNK_DB/_metrics/colddb
homePath = $SPLUNK_DB/_metrics/db
thawedPath = $SPLUNK_DB/_metrics/thaweddb
frozenTimePeriodInSecs = 1209600

From what I understand, with this conf applied the index should not exceed 5 GB, and when that limit is reached the warm/hot buckets should be removed, but it seems that's not taken into account in my case. The indexer works fine after purging the buckets and restarting it, but I don't get why the conf was not applied. Is there something I didn't get here? Is there a way to check the "characteristics" of my index once started? -> Checked, the conf is correctly applied.

If you know anything on this subject, please help me. Thank you!
Hi @mariojost , glad it worked out for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Thank you very much for steering me in the right direction. Your solution works best for me. One small issue I had with this that I did not anticipate: a switch could show up multiple times in the results, as it was hitting the threshold for more than one timeslot (two hours in a row). So I had to adjust my search a bit to account for that. What it does is add the counts of all the matching timeslots, so each hostname appears only once along with its total count:

index=network mnemonic=MACFLAP_NOTIF
| timechart count by hostname span=1h usenull=f useother=f
| untable _time hostname count
| where count>5
| stats sum(count) by hostname

Thank you very much again for your help.
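The dedup-and-sum step can be sketched outside SPL. A minimal Python illustration of the same "where count>5, then sum by hostname" logic, on invented per-hour rows like those untable produces:

```python
from collections import defaultdict

# Invented sample rows: (_time, hostname, count), one row per host per hour.
rows = [
    ("10:00", "switch-a", 7),
    ("11:00", "switch-a", 9),   # same switch over threshold two hours in a row
    ("10:00", "switch-b", 3),
    ("11:00", "switch-c", 12),
]

# where count>5, then stats sum(count) by hostname
totals = defaultdict(int)
for _time, hostname, count in rows:
    if count > 5:
        totals[hostname] += count

print(dict(totals))  # -> {'switch-a': 16, 'switch-c': 12}
```

switch-a appears once with 7 + 9 = 16, mirroring how the added stats command collapses the duplicate rows.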
I need help coloring a row's FONT text based on a cell value. The logic is like below:

case(match(value,"logLevel=INFO"),"#4f34eb", match(value,"logLevel=WARNING"),"#ffff00", match(value,"logLevel=ERROR"),"#53A051", true(),"#ffffff")
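As a sketch of the intended mapping, here is the same first-match-wins logic in Python (the hex values are copied from the case() expression above; the sample strings are invented):

```python
import re

# Mirrors SPL case(match(...), color, ..., true(), default): first match wins.
RULES = [
    (r"logLevel=INFO", "#4f34eb"),
    (r"logLevel=WARNING", "#ffff00"),
    (r"logLevel=ERROR", "#53A051"),
]

def font_color(value: str) -> str:
    for pattern, color in RULES:
        if re.search(pattern, value):
            return color
    return "#ffffff"  # fallback, like true() -> "#ffffff"

print(font_color("2024-05-01 logLevel=ERROR something failed"))  # -> #53A051
print(font_color("no level here"))                               # -> #ffffff
```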
L.s.,

At our company we have multiple heavy forwarders. Normally they talk to the central license manager, but for migration reasons we have to get them talking to themselves, so I think a forwarder license is in order. It looks like an easy process. I did the below:

./splunk edit licenser-groups Forwarder -is_active 1

This sets the settings below in /opt/splunk/etc/system/local/server.conf:

[license]
active_group = Forwarder

and

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

I had to set master_uri = self myself, then restart the server, and it looks like a go. But when I do this on every heavy forwarder, it gives these errors in _internal:

Duplicate license hash: [FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD], also present on peer .

and

Duplicated license situation not fixed in time (72-hour grace period). Disabling peer..

Why?? Is it really going to disable the forwarders when they use the forwarder license? What to do about it, and where did I go wrong?

Thanks in advance

greetz, Jari
Hi @sainag_splunk,  I am using Splunk Enterprise (Splunk-9.2.1-78803f08aabb.x86_64.rpm) on SLES 12 SP5. Splunk is using its built-in Python 3.7. After more investigation I found out that the problem also occurred on Splunk version 9.1.0.1, so it might not be linked to a recent update or change. I found out that the problem occurs on the views that are using the template [splunk_home]/.../templates/layout/base.html. Do you have any clues about what could cause the problem? Thanks for your reply!
I create notables manually, and I update the next actions at the same time as I create the notable.
Thanks for the valuable advertising! I don't need this app in general, but I can use the queries from its dashboards as examples to resolve my particular case.
But that will not show the apps column on the left side. Correct me if I am wrong, thanks.
@PickleRick is correct.  I forgot that your groupby will produce multiple columns in timechart, therefore a simple "where" command cannot do the job.  @gcusello 's solution is the closest to your desired output.
As you can see in the comments in this thread, this bug was fixed ages ago. Then another one popped up and was fixed. If you still have a problem with this functionality, you might have encountered yet another bug. Please just raise a case with support.
There are two possible approaches to this problem (both boil down to the same thing but work slightly differently). One is @gcusello 's approach - you bin your data into one-hour slots and just do normal stats. This way you get a separate result row for each hour for each device. Another approach is to do the timechart first, as you originally did ( @yuanliu 's remark about your "non-SPL SPL" is still valid):

| timechart count span=1h usenull=f

(watch out for useother and limit - use them if you need them) But the timechart produces separate timeseries points within one result row. So you need to...

| untable _time hostname count

This way you'll get separate data points for each device for each hour in separate rows. Now you can filter it easily with:

| where count>50