All Posts



There is probably more than one way of doing that. Depending on your actual data (both what it looks like and its volume characteristics), different approaches may be preferable in terms of performance.
The input produces... something. It doesn't have a reasonable date, at least not one which could be used as a "current" timestamp, and the whole output of lastlog should be ingested as one event (the LINE_BREAKER is set to a never-matching pattern). That's why the DATETIME_CONFIG = CURRENT setting is in place: the event is always indexed at the time it arrived at the indexer (or at a HF, if there is one in the way).

Since you don't have the TA_nix installed on the HF, Splunk does two things by default:
1) It breaks the whole lastlog output into separate events on the default LINE_BREAKER, which means every line is treated as a separate event.
2) Since it doesn't have proper settings for timestamp extraction (by default there aren't any), it tries to guess where and what the timestamp within the event is. And it guesses completely wrongly.

So yes, you should have the TA_nix installed on your HFs if they are in your ingestion path (with the inputs disabled, of course, if you don't need them there).

As a rule of thumb, index-time settings are applied on the first "heavy" component in the event's path, so if you have UF1 -> UF2 -> HF1 -> HF2 -> idx, the event is parsed on HF1, and that is where you need to install the add-ons for index-time operations. Two caveats:
1) Ingest actions can happen after that.
2) If you use INDEXED_EXTRACTIONS, the extractions happen immediately on the initial input component (so in our case, on UF1) and the data is sent downstream already parsed and not touched again before indexing (with the possible exception, again, of ingest actions).
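To illustrate the kind of settings described above (this is just a sketch, not the TA's actual file; the stanza name is hypothetical), a props.conf stanza that ingests a script's whole output as one event stamped with arrival time could look like this:

```
[nix:lastlog]
# A pattern that can never match, so the whole script
# output is kept as a single event
LINE_BREAKER = ((?!))
SHOULD_LINEMERGE = false
# Skip timestamp extraction entirely and use the time
# the event reached the parsing tier
DATETIME_CONFIG = CURRENT
```

With no such stanza present on the parsing tier, the defaults (line-per-event breaking plus timestamp guessing) apply, which matches the behaviour observed in this thread.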
So the reason for the observed error, only reported by the heavy forwarders, seems clear: the TA is not installed on the heavy forwarders. I might be missing some dependency, but I fail to see where this date would fit. The HF complains about the incorrectly generated dates as being "out of bounds", but the output from the script does not contain a "year" or a reasonable "timestamp" (as the year is missing).

PRINTF='{printf "%-30s %-30.30s %-s\n", username, from, latest}'
if [ "$KERNEL" = "Linux" ] ; then
    CMD='last -iw'
    # shellcheck disable=SC2016
    FILTER='{if ($0 == "") exit; if ($1 ~ /reboot|shutdown/ || $1 in users) next; users[$1]=1}'
    # shellcheck disable=SC2016
    FORMAT='{username = $1; from = (NF==10) ? $3 : "<console>"; latest = $(NF-6) " " $(NF-5) " " $(NF-4) " " $(NF-3)}'

The incorrect years are not included in the indexed events, so it seems there is little to gain by trying to "fix" this on the HF layer? The set-up is Universal Forwarder (TA nix) --> Heavy Forwarder () --> Indexer () --> Search Head (TA nix). The documentation seems to indicate that installing the add-on on the HF is "optional" (Install the Splunk Add-on for Unix and Linux - Splunk Documentation)? The intention would not be "data collection" from the HF server, and I don't see that this would necessarily fix the issue? If these events from lastlog.sh should not contain years and do not contain a readable timestamp, then should this error not be automatically suppressed by the HF?
Assuming "Source Host" contains the same values as server, you could try this:

| lookup ABC.csv server AS "Source Host"
Okay, if that's the case, but is Splunk 9.1.5 compatible with 4.38.0? Sorry, it's not my personal environment, which is why I'm being so careful before taking action. Ciao. Zake
Hi @ITWhisperer, The lookup file contains data like server OS and server environment (production or non-prod), which I need in the search results along with the data coming from the mentioned index, hence I have to fetch data from the lookup file as well. I hope that clears up your doubt.
Thank you very much @richgalloway !
Without much detail about your events, it is a little difficult to give detailed answers, so, in general terms: you could search both sources at the same time, then use eventstats to tag the events from the second part of the search with the note from the first part, using the IP address to correlate the events. Then you can count the events from the second part of the search which have the note and those that don't.
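A rough sketch of that shape, with all source and field names (source_a, source_b, ip, note) being hypothetical placeholders for whatever your events actually use:

```
(source=source_a) OR (source=source_b)
| eventstats values(note) as note by ip
| where source="source_b"
| stats sum(eval(if(isnotnull(note),1,0))) as with_note
        sum(eval(if(isnull(note),1,0))) as without_note
```

The eventstats spreads the note from the first source onto every event sharing the same IP, so the final stats can split the second source's events by whether a note was found.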
Hi @zksvc, I don't know which use cases you enabled, but yes, it's possible; in any case, an app upgrade is a normal activity in ES. Ciao. Giuseppe
It is not clear how the content of the lookup relates to the fields in your search - please can you expand a bit more?
Try this

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, date_month
| stats count by user app date_month
| chart count by app date_month
| sort app 0
Hello everyone, I have the below query which is fetching data for a particular index, but I also want a few fields from a lookup file, say ABC.csv, with columns 'Salary' and 'Date'. I am trying to fetch them but the data is coming back blank. Please help:

index=*infra* metric_label="Host : Reporting no data"
| bin span=6m@m metric_value as 6_min_data
| stats count(eval(metric_value=0)) as uptime count(eval(metric_value=1)) as downtime by 6_min_data, source_host
| eval total_uptime = uptime*360
| eval total_dowtime = downtime*360
| eval total_uptime = if(isnull(total_uptime),0,total_uptime)
| eval total_downtime = if(isnull(total_dowtime),0, total_dowtime)
| eval avg_uptime_perc = round((total_uptime/(total_uptime+total_downtime))*100 ,2)
| eval avg_downtim_perc = round((total_downtime/(total_uptime+total_downtime))*100,2)
| eval total_uptime = tostring(total_uptime, "duration")
| eval total_downtime = tostring(total_downtime, "duration")
| rename total_uptime as "Total Uptime", total_downtime as "Total Downtime", avg_uptime_perc as "Average uptime in %", avg_downtim_perc as "Average Downtime in %", source_host as "Source Host"
| table "Source Host" "Total Uptime" "Total Downtime" "Average uptime in %" "Average Downtime in %"
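One hedged sketch of pulling the lookup columns in, assuming ABC.csv has a field (here called server, which is an assumption; use whatever the CSV's header actually says) whose values exactly match the host values in the search:

```
... your existing search ...
| lookup ABC.csv server AS "Source Host" OUTPUT Salary Date
| table "Source Host" Salary Date "Total Uptime" "Total Downtime"
```

If Salary and Date still come back blank, the usual cause is that the lookup's key values do not match the search's values exactly (case, whitespace, or FQDN vs. short hostname).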
Hi @cherrypick , are you using INDEXED_EXTRACTIONS = JSON in your sourcetype? Ciao. Giuseppe
I am trying this out. I will let you know whether it worked! Thanks.
Hi @BRFZ, you have only one solution: use it and maintain it yourself. Otherwise you would have to create your own custom add-on, which amounts to the same thing! Ciao. Giuseppe
Hi All, I have two queries which search for users that use an app. The apps are not in the same fields, which is why I had to split the queries. But now I want to join the queries to get the results.

Query 1

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai"
| table user, url_domain, date_month
| stats count by user url_domain date_month
| chart count by url_domain date_month
| sort url_domain 0

Query 2

index=db_it_network sourcetype=pan* app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| table user, app, date_month
| stats count by user app date_month
| chart count by app date_month
| sort app 0

Example of the results I want:

App                 August  July
claude-base         123     120
google-gemini       124     42
openai              153     123
bing-ai-base        212     232
www.perplexity.com  14      12
Hello, I have successfully integrated Cloudflare with Splunk Enterprise using the pull method. This integration was set up on a Heavy Forwarder, so the logs are first received by the HF before being forwarded to the Indexers. While the integration itself is working correctly, I encountered an issue with the time zone in the logs. The API we are using requires the timestamps to be in UTC. As a result, when the API fetches the logs, the events are recorded in the UTC timezone. However, I need to convert these timestamps from UTC to UTC+5 (Pakistan Standard Time, PKT).

Here is a sample log event from Cloudflare:

---
EdgeEndTimestamp: 2024-08-26T09:07:43Z
EdgeResponseBytes: 72322
EdgeResponseStatus: 206
EdgeStartTimestamp: 2024-08-26T09:07:43Z
---

We are extracting the EdgeStartTimestamp and using it for the _time field, but this timestamp is in UTC format. In my props.conf file on the Heavy Forwarder, I have the following configuration:

[cloudflare:json]
disabled = false
TIME_PREFIX = \"EdgeStartTimestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

I also tried adding the TZ setting to props.conf:

[cloudflare:json]
TZ = Asia/Karachi

However, this didn't work because the events themselves contain timezone information (UTC), so the TZ setting doesn't have any effect. I then tried using TZ_ALIAS in props.conf:

[cloudflare:json]
TZ_ALIAS = Z=UTC+5

This didn't work either. Finally, I tried the following in props.conf, but it still didn't resolve the issue:

[cloudflare:json]
EVAL-_time = _time + 5*3600

Any help would be appreciated.

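One possible direction, sketched under the assumption that you really do want to shift _time at index time (note that Splunk stores _time as a UTC epoch and normally renders it in each user's configured timezone, which is often the better fix): an INGEST_EVAL transform applied on the HF. The transform name shift_to_pkt is hypothetical:

```
# props.conf on the HF
[cloudflare:json]
TRANSFORMS-shiftpkt = shift_to_pkt

# transforms.conf on the HF
[shift_to_pkt]
INGEST_EVAL = _time=_time+(5*3600)
```

Unlike EVAL-_time (which is a search-time operation and cannot change the indexed timestamp), INGEST_EVAL runs during parsing, so it can rewrite _time before the event reaches the indexers. Be aware this bakes the +5 offset into the data permanently.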
Hi @gcusello, Thanks for your reply. When I check, my "ES Content Updates" version is 4.0.0 and an update to 4.38.0 is available. I have a question: if I update it, will it affect the other use cases that have been enabled, or will it be safe for them?
Hi, I am currently learning Splunk and trying to set it up for myself on my local machine. I am looking at the Splunk BOTS v2 guide and can see there are a number of apps to be added. There is one app which I am unsure how to download and add via the web GUI, as there are no download links. App: Collectd App for Splunk Enterprise https://splunkbase.splunk.com/app/2875/ Upon visiting the site (which points to GitHub), I am presented with some instructions to configure things, which is a little confusing for new starters, but I am also not able to see the app download link. Am I missing something here, or is it just no longer relevant for v2? I am not using any forwarders, indexers etc., just one host to try to set this up. Thanks.
I managed to add extra Python modules to Python for Scientific Computing by building it from source. In my case I added xgboost for Linux (64-bit).

1) Cloned the GitHub repo: https://github.com/splunk/Splunk-python-for-scientific-computing
2) Just for increased stability I checked out the latest available git tag (currently 4.2.1; this step might not be necessary).
3) I then added the Python module that I want to environment.nix.yml (in my case: - xgboost=2.1.1).
4) Afterwards, followed the readme and ran:
make freeze
make build
make dist
5) Finally copied the tarball from the build directory to the user_apps directory on Splunk (replacing the existing Python for Scientific Computing app if already installed).

When using Python for Scientific Computing, copy the exec_anaconda and util Python scripts to the bin directory of your app. Also copy lib/splunklib from Python for Scientific Computing to the bin directory of your app. Add these lines to the start of your script:

import exec_anaconda
exec_anaconda.exec_anaconda()