All Posts

I need help coloring a row's font text based on a cell value. The logic is like below:

case(match(value,"logLevel=INFO"),"#4f34eb",match(value,"logLevel=WARNING"),"#ffff00",match(value,"logLevel=ERROR"),"#53A051",true(),"#ffffff")
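In a classic Simple XML dashboard, one possible starting point (a sketch, not a confirmed solution: expression-based color palettes color individual cells, and coloring the whole row's font typically needs a small JS table renderer on top) is a color format on the table that reuses the case() expression from the post. The `<search>` contents are omitted and the field name `value` is taken from the post:

```xml
<table>
  <search>...</search>
  <format type="color" field="value">
    <colorPalette type="expression">case(match(value,"logLevel=INFO"),"#4f34eb",match(value,"logLevel=WARNING"),"#ffff00",match(value,"logLevel=ERROR"),"#53A051",true(),"#ffffff")</colorPalette>
  </format>
</table>
```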
L.s., at our company we have multiple heavy forwarders. Normally they talk to the central license manager, but for migration reasons we have to get them talking to themselves, so a forwarder license is in order, I think. It looks like an easy process. I ran:

./splunk edit licenser-groups Forwarder -is_active 1

This sets the settings below in /opt/splunk/etc/system/local/server.conf:

[license]
active_group = Forwarder

and

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

I had to set master_uri = self myself. After restarting the server it looks like a go. But when I do this on every heavy forwarder, _internal shows the errors:

Duplicate license hash: [FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD], also present on peer .

and

Duplicated license situation not fixed in time (72-hour grace period). Disabling peer.

Why? Is it really going to disable the forwarders when they use the forwarder license? What can I do about it, and where did I go wrong?

Thanks in advance, greetz, Jari
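For reference, the resulting server.conf on each heavy forwarder would look roughly like this (a sketch assembled from the settings quoted in the post, not a verified working configuration):

```ini
# /opt/splunk/etc/system/local/server.conf
[license]
active_group = Forwarder
master_uri = self

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder
```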
Hi @sainag_splunk, I am using Splunk Enterprise (Splunk-9.2.1-78803f08aabb.x86_64.rpm) on SLES 12 SP5. Splunk is using its built-in Python 3.7. After more investigation I found that the problem also occurred on Splunk version 9.1.0.1, so it might not be linked to a recent update or change. I found that the problem occurs on the views that use the template [splunk_home]/.../templates/layout/base.html. Do you have any clues about what could cause the problem? Thanks for your reply!
I create the notable manually, and I update the next actions at the same time as I create the notable.
Thanks for the valuable advertising! I don't need this app in general, but I can use the queries from its dashboards as examples to resolve my particular case.
But that will not show the apps column on the left side. Correct me if I am wrong, thanks.
@PickleRick is correct.  I forgot that your groupby will produce multiple columns in timechart, therefore a simple "where" command cannot do the job.  @gcusello 's solution is the closest to your desired output.
As you can see in the comments in this thread, this bug was fixed ages ago. Then another one popped up and was fixed as well. If you still have a problem with this functionality, you might have encountered yet another bug. Please just raise a case with support.
There are two possible approaches to this problem (both boil down to the same thing but work slightly differently). One is @gcusello 's approach - you bin your data into one-hour slots and just do a normal stats. This way you get a separate result row for each hour for each device. The other approach is to do the timechart first, as you originally did ( @yuanliu 's remark about your "non-SPL SPL" is still valid):

| timechart count span=1h usenull=f

(watch out for useother and limit - use them if you need them). But the timechart produces separate time-series points within one result row. So you need to:

| untable _time hostname count

This way you'll get your separate data points for each device, for each hour, in separate rows. Now you can filter them easily with:

| where count>50
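Putting the second approach together end to end, a minimal sketch (index, mnemonic, field names, and the threshold of 50 are all taken from this thread):

```spl
index=network mnemonic=MACFLAP_NOTIF
| timechart span=1h usenull=f count by hostname
| untable _time hostname count
| where count > 50
```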
@gcusello Splunk can lock you out if you repeatedly misauthenticate. From authentication.conf.spec:

lockoutUsers = <boolean>
* Specifies whether locking out users is enabled.
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout state applies only per SHC member, not to the entire cluster.
* Default: true (users are locked out on incorrect logins)

lockoutMins = <positive integer>
* The number of minutes that a user is locked out after entering an incorrect password more than 'lockoutAttempts' times in 'lockoutThresholdMins' minutes.
* Any value less than 1 is ignored.
* Minimum value: 1
* Maximum value: 1440
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout state applies only per SHC member, not to the entire cluster.
* Default: 30

lockoutAttempts = <positive integer>
* The number of unsuccessful login attempts that can occur before a user is locked out.
* The unsuccessful login attempts must occur within 'lockoutThresholdMins' minutes.
* Any value less than 1 is ignored.
* Minimum value: 1
* Maximum value: 64
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout state applies only per SHC member, not to the entire cluster.
* Default: 5

lockoutThresholdMins = <positive integer>
* Specifies the number of minutes that must pass from the time of the first failed login before the failed login attempt counter resets.
* Any value less than 1 is ignored.
* Minimum value: 1
* Maximum value: 120
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout state applies only per SHC member, not to the entire cluster.
* Default: 5

The same can be set in the GUI. @Siddharthnegi These are global settings, so they are not user-specific. If your user is getting locked out, they must be providing wrong authentication data repeatedly.
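For illustration, this is roughly how those settings could be tuned in authentication.conf; the values shown are just the documented defaults from the spec excerpt above, not recommendations, and I'm assuming Splunk's native authentication stanza applies here:

```ini
# authentication.conf
[splunk_auth]
lockoutUsers = true
lockoutAttempts = 5
lockoutThresholdMins = 5
lockoutMins = 30
```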
Actually, you can set a home app on a per-user basis. So you'd have to reconfigure all users to have your app as the home app. But frankly, as a user I would probably be annoyed if someone did this to my account.
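One way the per-user default app mentioned above is commonly configured is via user-prefs.conf (a sketch; the app name my_home_app is illustrative, and paths assume an on-prem install):

```ini
# Per user: $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf
[general]
default_namespace = my_home_app
```

A global default for users without an explicit preference can instead go in $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf under a [general_default] stanza with the same setting.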
The illustrated SPL is really confusing. First, a where clause is not supported syntax inside timechart. Second, if your requirement is count > 50, why are you concerned about top 5? @gcusello 's solution should meet your requirement. Alternatively, if you still want timechart format, this would do:

index=network mnemonic=MACFLAP_NOTIF
| timechart span=1h usenull=f count by hostname
| where count > 50
The thing is that the regex must match your data properly, so we can't just "assume" something out of the blue. You can fiddle with the regex yourself (and see how and why it works): https://regex101.com/r/VaY5Qn/1
Hi @viku7474 , you have to create your Home Page dashboard as the default dashboard in a dedicated app. Then you have to give the dashboard and the app the correct grants (read), assigning them to the roles of your users. Finally, you have to assign the Home Page app as the default app for all the roles. Ciao. Giuseppe
Hi All, we are on Splunk version 9.2, and we want a custom dashboard as the landing page whenever any user logs in; the custom dashboard should appear along with the apps column on the left side. How do we achieve this? Right now this works only for me, and even then it lands in the Quick Links tab. I can also set the dashboard as the default under the navigation menu, but that will not show the apps column on the left side.
I am also facing the same issue. Did you find any solution?
Hi @Siddharthnegi , which data source are you speaking of? Splunk or Windows or something else? In Splunk, to my knowledge, an account cannot be locked, so maybe you're speaking of Windows; in that case, you won't find Windows logs in Splunk's internal indexes but in another one (maybe wineventlog or windows). Ciao. Giuseppe
Hi @darkins , as @PickleRick and @marnall also said, the regex depends on the log, so it's difficult to create a regex without a sample. If you have three words separated by a space, and sometimes there are only two words without any other rule, it's not possible to define a regex; if instead there's some additional rule in the first fields or in the next field, it's possible to identify a regex. Ciao. Giuseppe
Hi @mariojost , please try with stats:

index=network mnemonic=MACFLAP_NOTIF
| bin span=1h _time
| stats count BY hostname _time
| where count>50

Ciao. Giuseppe
Yes, it can be a bit unintuitive at first if you are used to ACLs and you expect the transforms list to stop at the first match. But it doesn't work this way: every transform whose REGEX matches is executed. So if you want to selectively index only chosen events, you must first make sure that all events are sent to the nullQueue, and then a second transform, applied afterwards, overrides that destination back to indexQueue so those few events are kept.
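The null-queue pattern described above can be sketched like this (the sourcetype my:sourcetype and the keep regex are illustrative; only the queue-routing structure is the point):

```ini
# props.conf -- transforms run in list order, left to right
[my:sourcetype]
TRANSFORMS-filter = setnull, setparsing

# transforms.conf
[setnull]
# matches every event and routes it to the nullQueue (discard)
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
# matches only the events to keep and re-routes them to the indexQueue
REGEX = logLevel=ERROR
DEST_KEY = queue
FORMAT = indexQueue
```

Order matters: setnull runs first and sends everything to nullQueue, then setparsing overrides the queue destination back to indexQueue for the matching events only.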