All Posts

Hi @zksvc , version 4.38.0 is compatible with Splunk 9.1.x, 9.2.x and 9.3.x. Anyway, this app mainly gives you new Use Cases and occasionally corrects some old ones; it's described in the documentation. Ciao. Giuseppe
Hi PickleRick, Yes, I'm very aware that the structure I'm working with is really bad. But let's say I pre-process the data (the input is currently a modular input): how would I do the same for data that has already been indexed? Furthermore, the data is continuous and I'm only able to retrieve up to a maximum of 10 days back (I can't change this, unfortunately). So, if I adjust the pre-processing and turn the data structure into something that makes sense, this will only take effect for future data. If I wanted to display the same table, the SPL wouldn't work with the older data in the same index.
Hi @man03359 , I haven't used this app. Anyway, it uses a summary index (which doesn't consume license) and a metric index, which could consume license at 150 bytes per event. About CPU and RAM, it will surely use some of them; the only way to know is to install it and monitor your resources. Ciao. Giuseppe
OK. I think I already told you about badly formed data. While in some cases you can argue about which JSON structure better represents your data, this one is clearly not a good approach, especially for Splunk. Let's take this snippet:

"enemy_information": ["name", "location", "powers"],
"enemy_information_values": [
  [ "Doomsday", "Kryptonian Prison", [ "Super Strength", [...] "Immunity to Kryptonite" ] ]
  [...]

There is no structural relation between enemy_information and enemy_information_values. From Splunk's point of view those will parse out (leaving aside possibly nested multivalued fields, which are not straightforward to deal with) as two separate multivalued fields with no relationship whatsoever between the values of one field and the values of the other. If anything, it should be either

"enemy_attributes": { "name": "Doomsday", "location": "Seattle, WA" [...] }

or

"enemy_attributes": [ {"name": "name", "value": "Doomsday"}, {"name": "location", "value": "Paris, France"} ... ]

Each option has its pros and cons, but the one you're presenting only seems to have cons.
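As a hedged illustration only (the enemy_attributes object form is just the suggestion above, and the spath paths are assumptions about how it would land in _raw), a minimal SPL sketch of how cleanly the first alternative reads back:

| spath input=_raw path=enemy_attributes.name output=enemy_name
| spath input=_raw path=enemy_attributes.location output=enemy_location
| table enemy_name enemy_location

With the parallel-array layout you would instead have to re-pair the two multivalue fields yourself (for example with mvzip and mvexpand), and the nested powers array still wouldn't survive that round trip cleanly.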
In addition to what @richgalloway already said: you don't need to "convert" timestamps to another timezone. The timestamps are reported by the source in some timezone (the timezone info might be included in the timestamp or not; if it is, you can use it, and if it is not, you have to set it explicitly). But the timestamp as parsed into the _time field is stored as an "absolute" timestamp and is shown in the UI using your user's configured timezone. So the same event will be shown at 14:39 if your user uses UTC or at 16:39 if they use CEST, and so on. But the event's contents will remain the same.
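A minimal props.conf sketch of the "set it explicitly" case; the sourcetype name, timezone and TIME_FORMAT here are assumptions, so adjust them to your data:

[my_sourcetype]
# The raw timestamps carry no timezone info, so declare the source's timezone here;
# _time is still stored as an absolute value and rendered in each user's timezone
TZ = Europe/Rome
TIME_FORMAT = %Y-%m-%d %H:%M:%S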
One caveat: there are occasionally situations (especially with newly introduced features) where the .spec file does not contain a proper entry. It doesn't happen often, but it does happen sometimes.
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
The time zone is included in the timestamp.  Tell Splunk about it and it will automatically convert the timestamp.

[cloudflare:json]
disabled = false
TIME_PREFIX = "EdgeStartTimestamp":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z
MAX_TIMESTAMP_LOOKAHEAD = 20
You are of course welcome to post feedback on the docs page. I do that fairly often if I find something not explained clearly enough or in enough detail. There's a little feedback form at the bottom of each docs page.
Can someone please help me with it, @gcusello?
Reasonable enough, and yes, I do get incorrect line breaking. So in this case the "fix" is to install the TA on the HF layer. Maybe it's just me, but could this be indicated with a bit more "urgency" in the documentation? The first table says "This add-on supports forwarders of any type for data collection. The host must run a supported version of *nix.", which is not really the same as "needed for correct parsing of logs". The "Distributed deployment feature compatibility" table does not even list HF, so while it is logical, it is not really intuitive based on the documentation. Thanks and all the best
There can probably be more than one way of doing that. Depending on your actual data (both what it looks like and its volume characteristics), different approaches may be the proper one in terms of performance.
The input produces... something. It doesn't have a reasonable date, at least not one which could be used as a "current" timestamp. And the whole output of lastlog should be ingested as one event (the LINE_BREAKER is set to a never-matching pattern). That's why the DATETIME_CONFIG=CURRENT setting is in place: the event will always be indexed at the time it arrived at the indexer (or the HF, if there is one in the way).

Since you don't have the TA_nix installed on the HF, Splunk does two things by default:
1) It breaks the whole lastlog output into separate events on the default LINE_BREAKER (which means every line is treated as a separate event).
2) Since it doesn't have proper settings for timestamp extraction (by default there aren't any), it tries to guess where and what the timestamp within the event is. And it guesses completely wrongly.

So yes, you should have the TA_nix installed on your HFs (with inputs disabled, of course, if you don't need them there) if they are in your ingestion path. As a rule of thumb, index-time settings are applied on the first "heavy" component in the event's path, so if you have UF1->UF2->HF1->HF2->idx, the event is parsed on HF1, and that is where you need to install the add-ons for index-time operations. Two caveats:
1) Ingest actions can happen after that.
2) If you use INDEXED_EXTRACTIONS, they happen immediately on the initial input component (so in our case on UF1) and the data is sent downstream already parsed and is not touched again before indexing (with the possible exception of ingest actions, again).
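As a hedged sketch only (the stanza name and exact values are assumptions, so check the actual TA_nix props.conf), this is the kind of configuration the explanation above describes:

[lastlog]
# Regex group that can never match, so the whole lastlog output stays one event
LINE_BREAKER = ((?!))
SHOULD_LINEMERGE = false
# No usable timestamp in the output, so index at arrival time
DATETIME_CONFIG = CURRENT

Whichever settings the TA really ships, they only take effect if the add-on is present on that first heavy component in the path.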
So the reason for the observed error, only reported by the heavy forwarders, seems clear: the TA is not installed on the heavy forwarders. I might be missing some dependency, but I fail to see where this date would fit. The HF complains about the incorrectly generated dates as being "out of bounds", but the output from the script does not contain a "year" or a reasonable "timestamp" (as the year is missing).

PRINTF='{printf "%-30s %-30.30s %-s\n", username, from, latest}'
if [ "$KERNEL" = "Linux" ] ; then
    CMD='last -iw'
    # shellcheck disable=SC2016
    FILTER='{if ($0 == "") exit; if ($1 ~ /reboot|shutdown/ || $1 in users) next; users[$1]=1}'
    # shellcheck disable=SC2016
    FORMAT='{username = $1; from = (NF==10) ? $3 : "<console>"; latest = $(NF-6) " " $(NF-5) " " $(NF-4) " " $(NF-3)}'

The incorrect years are not included in the indexed events, so it seems that there is little to gain by trying to "fix" this on the HF layer? The setup is Universal Forwarder (TA nix) --> Heavy Forwarder () --> Indexer () --> Search Head (TA nix). The documentation seems to indicate installing the add-on on the HF as "optional" (Install the Splunk Add-on for Unix and Linux - Splunk Documentation). The intention would not be "data collection" from the HF server, and I don't see that this would necessarily fix the issue. If these events from lastlog.sh should not contain years and do not contain a readable timestamp, then shouldn't this error be automatically suppressed by the HF?
Assuming "Source Host" contains the same values as server, you could try this | lookup ABC.csv server AS "Source Host"
Okay, if that's the case, but is Splunk 9.1.5 compatible with 4.38.0? Sorry, it's not my personal environment, which is why I'm so careful before taking action. Ciao. Zake
Hi @ITWhisperer , The lookup file contains data like server OS and server environment (production or non-prod), which I will need in the search results along with the data coming from the mentioned index, hence I have to fetch data from the lookup file as well. I hope I cleared your doubt.
Thank you very much @richgalloway !
Without much detail about your events, it is a little difficult to give detailed answers, so, in general terms: you could search both sources at the same time, then use eventstats to tag the events from the second part of the search with the note from the first part of the search, using the IP address to correlate the events. Then you can count the events from the second part of the search which have the note and those that don't.
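A hedged SPL sketch of that approach, with assumed index, source and field names (src_notes carries a note field, src_events holds the events to count, and both share an ip field):

index=your_index (source=src_notes OR source=src_events)
| eventstats values(note) AS note BY ip
| where source="src_events"
| stats count AS total, count(note) AS with_note
| eval without_note = total - with_note

The same correlation works with host or any other shared field in place of ip.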
Hi @zksvc , I don't know which use cases you enabled, but it's possible, but anyway, app upgrade is a normal activity in ES. Ciao. Giuseppe