Is there a way to set Show Current Time to a different time zone? I tried:
|eval _time=relative_time(now(),"+9h+30m")
but that did not work. Any ideas or thoughts? This app works great.
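One likely reason this fails: Splunk always renders _time in the viewing user's configured time zone, so overwriting _time with a shifted value often has no visible effect. A minimal sketch of a workaround (assuming a fixed +9h30m offset is really what you want, ignoring DST) is to format the shifted time into a plain string field, which Splunk displays verbatim:

```
| makeresults
| eval shifted_time=strftime(relative_time(now(), "+9h+30m"), "%F %T")
```

The field name shifted_time is illustrative. Since a fixed offset ignores daylight saving, changing the time-zone preference under the user's Account Settings is usually the more robust fix when it is an option.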
Trying to understand what the time field is after tstats.
Every event has a _time field, which is how tstats finds the latest event, but what does latest mean for a stats command that comes after tstats?
For example:
| tstats latest(var1) as var1 by var2 var3
| eval var4 = ………..
| stats latest(var4) by var3
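For context (my understanding, not stated in the thread): after the tstats command above, the results no longer carry a _time field unless you request it, so a later latest() has no timestamp to go by and effectively depends on row order. A sketch of the usual fix is to carry the timestamp through explicitly:

```
| tstats latest(var1) as var1 latest(_time) as _time by var2 var3
| eval var4 = ...
| stats latest(var4) as var4 by var3
```

With _time preserved in the results, the second latest() is again time-based rather than order-based.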
Hi @Sandivsu ... a few more details please: how did you come to know the queues are full, are there any warning/error messages, is it a production system or a dev/test system, and are there any license issues or warnings?
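In case it helps, queue fill can usually be confirmed from Splunk's own internal metrics (standard metrics.log fields, nothing specific to this thread):

```
index=_internal source=*metrics.log group=queue
| timechart span=5m max(eval(current_size_kb/max_size_kb*100)) as pct_full by name
```

Queues sitting near 100% usually point at a blocked downstream stage (indexing, typing, TCP out, etc.), which narrows where to look next.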
Hi @thevikramyadav ... all the best for your Splunk learning. Remember these 3 components:
1) Splunk Universal Forwarder: collects the logs and sends them to the Splunk indexer.
2) Splunk indexer: indexes (ingests) the logs; it reads the logs word by word and writes them down to flat files for searching.
3) Splunk search head: the web server that provides the Splunk GUI login page; it reads search requests from users, sends them to the indexers, collects the results from the indexers, consolidates them, and reports them.
From the Splunk documentation: In a distributed search environment, a search head is a Splunk Enterprise instance that handles search management functions, directing search requests to a set of search peers and then merging the results back to the user. A Splunk Enterprise instance can function as both a search head and a search peer. A search head that performs only searching, and not any indexing, is referred to as a dedicated search head. Search head clusters are groups of search heads that coordinate their activities. Search heads are also required components of indexer clusters.
Thanks for your reply. I notice that almost every app uses scripted inputs (e.g., the Splunk Add-on for Amazon Web Services, the Splunk Add-on for Google Workspace, etc.). In what cases do I need to distribute the app to my indexers?
My org is pulling in vuln data using the Qualys TA and I am trying to put together a handful of searches and dashboards to see metrics quickly. I'm currently using the following over the last 30 days:
index=qualys sourcetype=qualys:hostDetection SEVERITY=5 STATUS="FIXED"
| dedup HOST_ID, QID
| eval MTTR = ceiling((strptime(LAST_FIXED_DATETIME, "%FT%H:%M:%SZ") - strptime(FIRST_FOUND_DATETIME, "%FT%H:%M:%SZ")) / 86400)
```| bucket span=1d _time```
| timechart span=1d avg(MTTR) as AVG_MTTR_PER_DAY
| streamstats window=7 avg(AVG_MTTR_PER_DAY) as 7_DAY_AVG
This gets me close, but I believe this is giving the average of averages, not the overall average. Using the month of May, I wouldn't have a calculated value until May 8th, which would use the data from May 1-7; May 9th would be from May 2-8, etc. Any help on how to calculate the overall average?
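A sketch of one way to get a true rolling overall average (weighting each fixed vuln rather than each day; assuming the same Qualys fields as above): carry the daily sum and count through timechart, and only divide after streamstats has summed across the window:

```
index=qualys sourcetype=qualys:hostDetection SEVERITY=5 STATUS="FIXED"
| dedup HOST_ID, QID
| eval MTTR = ceiling((strptime(LAST_FIXED_DATETIME, "%FT%H:%M:%SZ") - strptime(FIRST_FOUND_DATETIME, "%FT%H:%M:%SZ")) / 86400)
| timechart span=1d sum(MTTR) as daily_total count as daily_fixes
| streamstats window=7 sum(daily_total) as total_7d sum(daily_fixes) as fixes_7d
| eval 7_DAY_OVERALL_AVG = round(total_7d / fixes_7d, 2)
```

Because the division happens after summing the window, days with many fixes weigh proportionally more, which gives the overall average rather than an average of daily averages.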
1. Yes. Such apps should be installed on a heavy forwarder. 2. Some preparation may be necessary, depending on the app; for example, inputs.conf should be removed or all inputs disabled.
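For point 2, a minimal sketch of disabling an app's inputs via a local override (the path is illustrative; substitute the real app directory name):

```
# $SPLUNK_HOME/etc/master-apps/<app_name>/local/inputs.conf
[default]
disabled = 1
```

Whether a [default] stanza catches every input type can vary by app, so explicitly setting disabled = 1 in each input stanza, or deleting the app's inputs.conf entirely, is the safer route.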
Hi, I just installed an indexer cluster and I already know that I should place apps in the $SPLUNK_HOME/etc/master-apps/ directory on my manager node to distribute them across all indexers, but I have 2 questions. 1. If an app that I deployed on the indexers uses Python scripts to fetch data, will this data be duplicated? 2. Do I need to prepare an app before deploying it to my indexers (remove unnecessary dashboards, eventtypes, etc.)? Or can I leave it without changes?
As @ITWhisperer says, "not working as expected", "doesn't work", etc., should be forbidden in this forum. More specifically, if your raw events contain things like "letterIdAndDeliveryIndicatorMap=[abc=P, efg=P, HijKlmno=E]", Splunk's default extraction should have given you abc, efg, and HijKlmno without you asking. (It also gives you a field letterIdAndDeliveryIndicatorMap.) If you do table *, what do you see? Here is an emulation:
| makeresults
| eval _raw="letterIdAndDeliveryIndicatorMap=[abc=P, efg=P, HijKlmno=E]"
| extract
Hi @uagraw01
1) Please check if all is good with the license. Do you see any warnings/errors related to the license?
2) On the forwarder, please check this: $SPLUNK_HOME/bin/splunk btool outputs list --debug
3) On the indexer, please check this: $SPLUNK_HOME/bin/splunk btool inputs list --debug (if $SPLUNK_HOME is not set up properly, then use the exact path, like /opt/splunk)
4) From the UF, try to ping the indexer.
5) From the UF, please try to telnet to the indexer on the receiving port.
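If the network path checks out, the indexer's internal metrics can also confirm whether the forwarder is connecting at all (standard _internal data, nothing specific to this thread):

```
index=_internal source=*metrics.log group=tcpin_connections
| stats latest(_time) as last_seen by hostname sourceIp
| convert ctime(last_seen)
```

A forwarder missing from this list has never connected, which points back at outputs.conf or the network; one that appears but sends no events points at its inputs instead.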
Permissions seem to be fine, and the deleted users do not show up in the passwd file. However, the users still show up in the GUI and when I run "splunk list user".
Hi, could someone please suggest an alternative product for Splunk Business Flow, as this particular product was deprecated post 2020? If there is no single product that provides the same functionality, is there a different way to monitor business flows? Thanks, Pradeep.
You should be able to use the split function after extracting, which will convert it to a multivalue (MV) field, and then run stats against that MV field. Something like this:
<base_search>
| rex field=_raw "letterIdAndDeliveryIndicatorMap=\[(?<letterIdAry>[^\]]+)"
| eval
    letterIdAry=split(letterIdAry, ","),
    letterIdAry=case(
        mvcount(letterIdAry)==1, trim(letterIdAry, " "),
        mvcount(letterIdAry)>1, mvmap(letterIdAry, trim(letterIdAry, " "))
    )
| stats count as event_count by letterIdAry
Example output:
Yes, I found that my regex had a space between ]]; once fixed, I was able to extract them as "abc=P, efg=P, HijKlmno=E", thanks. Next I am trying to get stats on the count of abc=P.
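Building on the split approach from the earlier answer (a sketch; field names as in that answer), one way to count a specific pair such as abc=P is to expand the multivalue field into one row per pair and filter:

```
<base_search>
| rex field=_raw "letterIdAndDeliveryIndicatorMap=\[(?<letterIdAry>[^\]]+)"
| eval letterIdAry=mvmap(split(letterIdAry, ","), trim(letterIdAry, " "))
| mvexpand letterIdAry
| search letterIdAry="abc=P"
| stats count as abc_P_count
```

If some events yield only a single pair, the mvcount guard from the earlier answer may still be needed around mvmap, depending on your Splunk version.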