All Posts

@PickleRick is correct. I forgot that your groupby will produce multiple columns in timechart, so a simple "where" command cannot do the job. @gcusello's solution is the closest to your desired output.
As you can see in the comments in this thread, this bug was fixed ages ago. Then another one popped up and was fixed. If you still have a problem with this functionality, you might have encountered yet another bug. Please just raise a case with support.
There are two possible approaches to this problem (both boil down to the same thing but work slightly differently).

One is @gcusello's approach: you bin your data into one-hour slots and just do normal stats. This way you get a separate result row for each hour for each device.

Another approach is to do the timechart first, as you originally did (@yuanliu's remark about your "non-SPL SPL" is still valid):

| timechart count span=1h usenull=f

(watch out for useother and limit - use them if you need them)

But the timechart produces separate timeseries points within one result row. So you need to

| untable _time hostname count

This way you'll get your separate data points for each device for each hour in separate rows. Now you can filter them easily with

| where count>50
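Putting the second approach together into one pipeline (the index and field names are taken from the original question; adjust the threshold as needed), it would look something like this:

```
index=network mnemonic=MACFLAP_NOTIF
| timechart span=1h usenull=f count by hostname
| untable _time hostname count
| where count > 50
```

The untable step is what turns the one-row-per-hour, one-column-per-host timechart output back into one row per (hour, host) pair, which is what makes the simple where filter work.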
@gcusello Splunk can lock you out if you repeatedly misauthenticate. From authentication.conf.spec:

lockoutUsers = <boolean>
* Specifies whether locking out users is enabled.
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout state applies only per SHC member, not to the entire cluster.
* Default: true (users are locked out on incorrect logins)

lockoutMins = <positive integer>
* The number of minutes that a user is locked out after entering an incorrect password more than 'lockoutAttempts' times in 'lockoutThresholdMins' minutes.
* Any value less than 1 is ignored.
* Minimum value: 1
* Maximum value: 1440
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout state applies only per SHC member, not to the entire cluster.
* Default: 30

lockoutAttempts = <positive integer>
* The number of unsuccessful login attempts that can occur before a user is locked out.
* The unsuccessful login attempts must occur within 'lockoutThresholdMins' minutes.
* Any value less than 1 is ignored.
* Minimum value: 1
* Maximum value: 64
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout state applies only per SHC member, not to the entire cluster.
* Default: 5

lockoutThresholdMins = <positive integer>
* Specifies the number of minutes that must pass from the time of the first failed login before the failed login attempt counter resets.
* Any value less than 1 is ignored.
* Minimum value: 1
* Maximum value: 120
* This setting is optional.
* If you enable this setting on members of a search head cluster, user lockout state applies only per SHC member, not to the entire cluster.
* Default: 5

The same can be set in the GUI.

@Siddharthnegi These above are global settings, so they are not user-specific. If your user is getting locked out, they must be providing wrong authentication data repeatedly.
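For illustration, the documented defaults above correspond to a stanza like the following in authentication.conf (this assumes native Splunk authentication, where these settings live under the [splunk_auth] stanza; the values shown are just the defaults spelled out explicitly):

```
[splunk_auth]
lockoutUsers = true
lockoutAttempts = 5
lockoutThresholdMins = 5
lockoutMins = 30
```

Read together: 5 failed logins within 5 minutes locks the user out for 30 minutes.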
Actually, you can set a home app on a per-user basis. So you'd have to reconfigure all users to have your app as the home app. But frankly, as a user I would probably be annoyed if someone did this to my account.
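A sketch of what that per-user setting looks like on disk (the app name my_app is made up for the example; the same preference can also be set from the UI under the user's account settings):

```
# $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf
[general]
default_namespace = my_app
```

default_namespace is the user's default app, i.e. where they land after logging in.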
The illustrated SPL is really confusing. First, a where clause is not supported syntax inside timechart. Second, if your requirement is count > 50, why are you concerned about top 5? @gcusello's solution should meet your requirement. Alternatively, if you still want the timechart format, this would do:

index=network mnemonic=MACFLAP_NOTIF
| timechart span=1h usenull=f count by hostname
| where count > 50
The thing is that regex must match your data properly so we can't just "assume" something out of the blue. You can fiddle with the regex for yourself (and see how and why it works) https://regex101... See more...
The thing is that the regex must match your data properly, so we can't just "assume" something out of the blue. You can fiddle with the regex yourself (and see how and why it works): https://regex101.com/r/VaY5Qn/1
Hi @viku7474, you have to create your Home Page dashboard in a dedicated app and make it the default dashboard of that app. Then, you have to give the dashboard and the app the correct grants (Read), assigning them to the roles of your users. Finally, you have to assign the Home Page app as the default app for all the roles. Ciao. Giuseppe
Hi All, We are on Splunk version 9.2, and we want a custom dashboard as the landing page whenever any user logs in; the custom dashboard should appear along with the apps column on the left side. How do we achieve this? Right now, this works only for me, and even then it lands in the Quick Links tab. I can also set the dashboard as the default under the navigation menu, but then it does not show the apps column on the left side.
I am also facing the same issue. Did you find any solution?
Hi @Siddharthnegi, which data source are you speaking of? Splunk or Windows or something else? In Splunk, to my knowledge, an account cannot be locked, so maybe you're speaking of Windows; in that case, you won't find Windows logs in Splunk internal indexes, but in another one (maybe wineventlog or windows). Ciao. Giuseppe
Hi @darkins, as @PickleRick and @marnall also said, the regex depends on the log, so it's difficult to create a regex without some sample. If you have three words separated by a space, and sometimes there are only two words without any other rule, it's not possible to define a regex; if instead there's some additional rule in the first fields or in the next field, it's possible to identify a regex. Ciao. Giuseppe
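For illustration only (the real pattern depends on your actual data, which is the whole point above): a rex that captures two whitespace-separated words and an optional third could be tested like this. The field names first, second, and third are made up for the example.

```
| makeresults
| eval raw="alpha beta gamma"
| rex field=raw "^(?<first>\S+)\s+(?<second>\S+)(?:\s+(?<third>\S+))?$"
| table first second third
```

With only two words in raw, third simply comes back empty; that is what the non-capturing optional group (?:...)? buys you.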
Hi @mariojost, please try with stats:

index=network mnemonic=MACFLAP_NOTIF
| bin span=1h _time
| stats count BY hostname _time
| where count>50

Ciao. Giuseppe
Yes, it can be a bit unintuitive at first if you are used to ACLs and you expect the transforms list to just match at some point and not continue. But it doesn't work this way. All transforms are checked to see if their REGEX matches, and they are executed if it does. So if you want to selectively index only chosen events, you must first make sure that all events are sent to nullQueue, and then another transform applied afterwards will overwrite that destination back to indexQueue, making sure those few events are kept.
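A minimal sketch of that drop-all-then-keep-some pattern (the sourcetype name and the keep regex here are made up; substitute your own, and note that transforms in the TRANSFORMS- list are applied left to right, which is why the order matters):

```
# props.conf
[my_sourcetype]
TRANSFORMS-filter = drop_everything, keep_selected

# transforms.conf
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_selected]
REGEX = MACFLAP_NOTIF
DEST_KEY = queue
FORMAT = indexQueue
```

Every event first gets routed to nullQueue; only events matching the second REGEX get their queue overwritten back to indexQueue and survive.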
@doeh - I checked your app code and apparently you have many hard-coded paths in the code, which will not work in a clustered environment, and specifically in a search-head-clustered environment. This is not recommended, hence use Splunk REST endpoints for all the file modifications: lookups can be updated/created via a REST endpoint, do not use a hard-coded Splunk home path (/opt/splunk/) but use this import statement instead (from splunk.clilib.bundle_paths import make_splunkhome_path), and so on. I hope this helps!!! Kindly upvote if it helps!!!
Thanks @PickleRick, it worked! It was an issue with the order of the transforms, as you pointed out. I have adjusted it and now I am able to keep only the specific events and discard the rest.
1. Are calls on the C++ layer considered in the overall calls? 2. Suppose there is one transaction which flows from Web Server to Java to Node.js - will it be counted as 3 calls or as one call?
We search through the logs of switches, and there are some logs that are unconcerning if you just have a couple of them, like 5 in an hour. But if you have more than 50 in an hour, there is something wrong and we want to raise an alarm for that. The problem is, I cannot simply search back over the last hour and show devices that have more than 50 results, because I would not catch an issue that existed 5 hours ago. So I am looking into timecharts that compute a statistic every hour, and then I want to filter out the "charts" that have less than x per slot.

What I came up with is this (searching the last 24h):

index=network mnemonic=MACFLAP_NOTIF
| timechart span=1h usenull=f count by hostname where max in top5

But this does not work, as I still get all the timechart slots where I have 0 or fewer than 50 logs. So imagine the following data:

switch01 08:00-09:00 0 logs
switch01 09:00-10:00 8 logs
switch01 10:00-11:00 54 logs
switch01 11:00-12:00 61 logs
switch01 12:00-13:00 42 logs
switch02 08:00-09:00 6 logs
switch02 09:00-10:00 8 logs
switch02 10:00-11:00 33 logs
switch02 11:00-12:00 29 logs
switch02 12:00-13:00 65 logs

So my ideal search would return the following lines:

Time Hostname Results
10:00-11:00 switch01 54
11:00-12:00 switch01 61
12:00-13:00 switch02 65

The time is not that important; I'm looking more for the results based on the amount of the result. Any help is appreciated.
Hi, I have a user, let's say USER1, whose account is getting locked every day. I searched his username on Splunk and events are coming from 2 indexes: _internal and _audit. How do I check the reason his account got locked?
%7N is not valid; it will support %9N and parse the 7-digit timestamp data correctly, including the time zone, but %9N is actually broken in that it will ONLY recognise microseconds (6 places). See this example, where nanoseconds 701 and 702 are in two fields - when parsed and reconstructed, the times are the same, with only microsecond precision:

| makeresults
| eval time1="2024-11-25T01:45:03.512993701-05:00"
| eval time2="2024-11-25T01:45:03.512993702-05:00"
| eval tester_N=strptime(time1, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval tt_N=strftime(tester_N, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval tester_N2=strptime(time2, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval tt_N2=strftime(tester_N2, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval isSame=if(tester_N2=tester_N,"true","false")