All Posts

Hi @darkins , as @PickleRick and @marnall also said, the regex depends on the log, so it's difficult to create a regex without some samples. If you have three words separated by a space, and sometimes there are only two words without any other rule, it's not possible to define a regex; if instead there's some additional rule in the first fields or in the next field, it's possible to identify a regex. Ciao. Giuseppe
Hi @mariojost , please try with stats:

index=network mnemonic=MACFLAP_NOTIF
| bin span=1h _time
| stats count BY hostname _time
| where count>0

Ciao. Giuseppe
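If you then only want the hour slots that cross the alarm threshold from the question, a minimal variation of the same search (assuming the 50-events-per-hour limit mentioned there) could be:

index=network mnemonic=MACFLAP_NOTIF
| bin span=1h _time
| stats count BY hostname _time
| where count>50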
Yes, it can be a bit unintuitive at first if you are used to ACLs and expect the transforms list to stop at the first match and not continue. But it doesn't work this way. Every transform in the list is checked against the event, and each one whose REGEX matches is executed. So if you want to selectively index only chosen events, you must first make sure that all events are sent to the nullQueue, and then another transform applied afterwards overwrites that destination with indexQueue, making sure those few events are kept.
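A rough sketch of that ordering (the source type name and the keep pattern below are placeholders, not taken from the thread):

props.conf:
[my_sourcetype]
TRANSFORMS-filter = drop_everything, keep_selected

transforms.conf:
# send every event to the nullQueue first
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# then route the events you actually want back to the indexQueue
[keep_selected]
REGEX = interesting_pattern
DEST_KEY = queue
FORMAT = indexQueue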
@doeh- I checked your App code and apparently you have many hard-coded paths in the code, which will not work in a clustered environment and specifically in a search-head-clustered environment. This is not recommended, hence use Splunk REST endpoints for all the file modifications:
- Lookups can be updated/created with a REST endpoint.
- Do not use a hard-coded Splunk home path (/opt/splunk/); build paths with this import instead (from splunk.clilib.bundle_paths import make_splunkhome_path).
- And so on.
I hope this helps!!! Kindly upvote if it helps!!!
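For illustration, a minimal sketch of the path helper mentioned above (the app and lookup file names are made-up placeholders):

# builds a path under $SPLUNK_HOME without hard-coding /opt/splunk
from splunk.clilib.bundle_paths import make_splunkhome_path

lookup_path = make_splunkhome_path(['etc', 'apps', 'my_app', 'lookups', 'my_lookup.csv'])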
Thanks @PickleRick , it worked! It was the issue with the order of transforms as you pointed out; I have adjusted it and now I am able to keep only the specific events and discard the rest.
1. Are calls on the C++ layer considered in the overall calls?
2. Suppose there is one transaction which flows from Web Server to Java to Node.js; will it be counted as 3 calls or as one call?
We search through the logs of switches, and there are some logs that are unconcerning if you just have a couple of them, like 5 in an hour. But if you have more than 50 in an hour, there is something wrong and we want to raise an alarm for that. The problem is, I cannot simply search back over the last hour and show devices that have more than 50 results, because I would not catch the issue that existed 5h ago. So I am looking into timecharts that do a statistic every hour, and then I want to filter out the "charts" that have less than x per slot. What I came up with is this (searching the last 24h):

index=network mnemonic=MACFLAP_NOTIF | timechart span=1h usenull=f count by hostname where max in top5

But this does not work, as I still get all the timechart slots where I have 0 or fewer than 50 logs. So imagine the following data:

switch01 08:00-09:00 0 logs
switch01 09:00-10:00 8 logs
switch01 10:00-11:00 54 logs
switch01 11:00-12:00 61 logs
switch01 12:00-13:00 42 logs
switch02 08:00-09:00 6 logs
switch02 09:00-10:00 8 logs
switch02 10:00-11:00 33 logs
switch02 11:00-12:00 29 logs
switch02 12:00-13:00 65 logs

My ideal search would return the following lines:

Time           Hostname   Results
10:00-11:00    switch01   54
11:00-12:00    switch01   61
12:00-13:00    switch02   65

The time is not that important; I'm looking more for the results based on the amount of the result. Any help is appreciated.
HI , I have a user let say USER1 , his account is getting locked everyday , I searched his username on splunk and events are coming from 2 indexes _internal,_audit . How do I check the reason of his ... See more...
Hi, I have a user, let's say USER1, whose account is getting locked every day. I searched his username in Splunk and events are coming from 2 indexes, _internal and _audit. How do I check the reason for his locked account?
%7N is not valid; it will support %9N and parse the 7-digit timestamp data correctly including the time zone, but %9N is actually broken in that it will ONLY recognise microseconds (6 places). See this example, where nanoseconds 701 and 702 are in two fields - when parsed and reconstructed, the times are the same because only microsecond precision is retained:

| makeresults
| eval time1="2024-11-25T01:45:03.512993701-05:00"
| eval time2="2024-11-25T01:45:03.512993702-05:00"
| eval tester_N=strptime(time1, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval tt_N=strftime(tester_N, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval tester_N2=strptime(time2, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval tt_N2=strftime(tester_N2, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval isSame=if(tester_N2=tester_N,"true","false")
Even though this thread is old, it's perhaps worth noting the ability to use TERM and PREFIX with tstats, which I believe was introduced in Splunk 8 at the end of 2019 and so would not have been possible when this question was written. https://conf.splunk.com/files/2020/slides/PLA1089C.pdf
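For illustration, roughly the kind of searches that became possible (the index, term, and prefix below are placeholders): counting events that contain an exact indexed term, or grouping by the values that follow a key prefix.

| tstats count where index=web TERM(192.168.1.1) by _time span=1h

| tstats count where index=web by PREFIX(status=)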
Just curious to find out if anyone has ever integrated Splunk Cluster with ITSI. It seems to me that SC certainly qualifies as a service with many dependencies, quite a few of which are not covered by the monitoring console. While comprehensive, it doesn't include many things which SC just assumes are working correctly, primary among these being the networks on which everything relies. For instance, if you have a flapping port on a switch or a configuration problem with a switch or router, this could potentially cause many interesting issues with SC about which the MC would have no clue. Has anyone ever made any moves along these lines? Charles
I guess the key is that I need to say that field2 equals everything up to an "m" PRECEDED by a space?
Not sure what else to put; this is what my data looks like:

thisisfield1 thisisfield2 mynextfield3
thisisfield1 mynextfield3

I want these two lines to display as:

field1          field2          field3
thisisfield1    thisisfield2    mynextfield3
thisisfield1                    mynextfield3
It is 12 years later, and this is still an issue. You cannot set 'requireClientCert=true' in server.conf on, for example, a Deployment Server, and have a working Web UI on that Deployment Server.  Setting 'requireClientCert=true' in server.conf still breaks the Web UI in late November 2024.
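For reference, the setting in question sits under the [sslConfig] stanza of server.conf; a minimal sketch:

[sslConfig]
requireClientCert = true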
Hi @fatsug,

Adding to what was previously discussed, you can break down the behavior and where it occurs by merging $SPLUNK_HOME/etc/system/default/props.conf and $SPLUNK_HOME/etc/apps/Splunk_TA_nix/default/props.conf. @PickleRick's use of btool in the last question hints at how to do this:

$SPLUNK_HOME/bin/splunk btool --debug props list lastlog

The source type will also inherit [default] settings provided by any app and any additional [lastlog] settings you may have. Looking at the settings relevant to the question:

[lastlog]
KV_MODE = multi
LINE_BREAKER = ^((?!))$
SHOULD_LINEMERGE = false
TRUNCATE = 1000000

The LINE_BREAKER, SHOULD_LINEMERGE, and TRUNCATE settings tell Splunk the boundaries of the event. These settings are used by the linebreaker and aggregator on a heavy forwarder or indexer; they are not, except under specific conditions, used by a universal forwarder. The SHOULD_LINEMERGE setting disables reassembly of multiple lines (delineated by the LINE_BREAKER setting) into a single event; in this case, the LINE_BREAKER setting does that work for us more efficiently. As @PickleRick noted, the regular expression ^((?!))$ matches nothing. When lastlog.sh is executed, its entire output up to TRUNCATE = 1000000 bytes (~1 MB) is indexed as one event:

USERNAME      FROM      LATEST
user2         10.0.0.1  Wed Oct 30 11:20
another_user  10.0.0.1  Wed Oct 30 11:21
discovery     10.0.0.2  Tue Oct 29 22:19
scanner       10.0.0.3  Mon Oct 28 21:39
admin_user    10.0.0.4  Mon Oct 21 11:19
root          10.0.0.1  Tue Oct 1 08:57

If Splunk_TA_nix is not installed on the search head, then the sample above is what you would see as a single event in your search results.

What happens if the lastlog.sh output is longer than 1,000,000 bytes? All data after the 1,000,000th byte is simply truncated and lost. For example:

USERNAME      FROM      LATEST
user2         10.0.0.1  Wed Oct 30 11:20
another_user  10.0.0.1  Wed Oct 30 11:21
discovery     10.0.0.2  Tue Oct 29 22:19
scanner       10.0.0.3  Mon Oct 28 21:39
admin_user    10.0.0.4  Mon Oct 21 11:19
root          10.0.0.1  Tue Oct 1 08:57
... ~1,000,000 bytes of data
jsmit

If the "t" in "jsmit" is the 1,000,000th byte, then that's the last byte indexed, and everything after "jsmit" is truncated.

If Splunk_TA_nix is installed on the search head, then the KV_MODE = multi setting tells Splunk to pipe the events through the multikv command before returning the results. On disk, there is only one event, and that event has been split into six separate events using the first line for field names. If Splunk_TA_nix is not installed on the search head, you can include the multikv command in your search to produce the same results:

index=main sourcetype=lastlog | multikv

You can also test multikv directly using sample data:

| makeresults
| eval _raw="USERNAME FROM LATEST
user2 10.0.0.1 Wed Oct 30 11:20
another_user 10.0.0.1 Wed Oct 30 11:21
discovery 10.0.0.2 Tue Oct 29 22:19
scanner 10.0.0.3 Mon Oct 28 21:39
admin_user 10.0.0.4 Mon Oct 21 11:19
root 10.0.0.1 Tue Oct 1 08:57"
| multikv

If you have Splunk_TA_nix installed on your forwarders, your heavy forwarders, your indexers, and your search heads, then everything is "OK." Technically, your heavy forwarders are doing the parsing work instead of your indexers; however, if your indexers are Linux, you probably want to run lastlog.sh on them, too. Eventually, though, you'll realize that auditd, /var/log/auth.log, /var/log/secure, etc. are better sources of login data, although /var/log/wtmp and lastlog.sh are useful if you want regular snapshots of login times relative to wtmp retention on your hosts. You may find that log rotation for wtmp is either misconfigured or not configured at all on older Linux hosts.
Assuming s has already been extracted  | eval time=strftime(_time, "%m-%d-%y %T") | rex "env_from\s+value=(?<sender>\S+)" | rex "env_rcpt\s+r=\d+\s+value=(?<receiver>\S+)" | stats first(time) as D... See more...
Assuming s has already been extracted:

| eval time=strftime(_time, "%m-%d-%y %T")
| rex "env_from\s+value=(?<sender>\S+)"
| rex "env_rcpt\s+r=\d+\s+value=(?<receiver>\S+)"
| stats first(time) as Date first(ip) as ConnectingIP first(reverse) as ReverseLookup last(action) last(msgs) as MessagesSent count(receiver) as NumberOfMessageRecipients first(size) as MessageSize1 first(attachments) as NumberOfAttachments values(sender) as Sender values(receiver) as Recipients first(subject) as Subject by s
| where Sender!=""
| stats list(ConnectingIP) as ConnectingIP list(ReverseLookup) as ReverseLookup by Sender
Thanks @PickleRick , I didn't see that, and it wasn't specifically extracted in the search.
Wait a moment. What exactly are you trying to do? Because it sounds as if you were trying to use WebUI to configure... inputs(?) in an app which you installed using WebUI somewhere over a SH cluster. Or are you doing something else?
@ITWhisperer Look closer. There is an s=identifier pair in the event.
It all depends on how your fields are delimited/anchored. @marnall 's answer is obvious if you have just two or three words separated by spaces. If your "layout" is different, you have to adjust it.
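For example, a rough rex sketch assuming each field is a single word and the last field always starts with "m", as suggested earlier in the thread (the field names are placeholders):

| rex "^(?<field1>\S+)(\s+(?<field2>\S+))?\s+(?<field3>m\S+)$"

With the second sample line (only two words), field2 simply comes out empty.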