
All Posts

Confirm the alerts are firing:

| rest splunk_server=local /servicesNS/-/-/alerts/fired_alerts

Check the _internal index for errors sending alerts to your email provider:

index=_internal "sendemail"

Check with your email provider/admin to see what is happening to the alerts before they reach your mailbox. As always, check your Spam folder.
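If you want to narrow that _internal search down to actual send failures, here is a sketch; it assumes sendemail errors land in python.log (the usual place for the sendemail script's output), so adjust to what your instance actually logs:

index=_internal source=*python.log* "sendemail" (ERROR OR WARNING)

Any SMTP authentication or connection failures should show up there with the provider's error message.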
Since you do not have a Deployment Server (they're optional), leave the "Deployment host" box empty when installing the UF.  Put your Splunk Enterprise IP address in the "Receiving host" box.  In Splunk Enterprise, enable data reception by going to Settings->Forwarding and Receiving and clicking on "Configure receiving".
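For reference, after installation the UF should end up with an outputs.conf roughly like this sketch (the IP and port 9997 here are placeholders; use your Splunk Enterprise address and whatever port you set under "Configure receiving"):

[tcpout:default-autolb-group]
# Splunk Enterprise receiver, <ip>:<receiving port>
server = 192.168.1.10:9997

And enabling receiving on the Splunk Enterprise side is equivalent to this inputs.conf stanza:

[splunktcp://9997]
disabled = 0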
Thanks, I tried this but it only seems to list results that occurred between 00:00 and 00:15, despite the search time range being "Last 15 minutes".
I am a noob in SOC and just started learning; I use Splunk for practice. Recently, I installed the Universal Forwarder in my VMware Windows 10 OS and Splunk Enterprise on my main desktop. During the Universal Forwarder installation, I entered my main PC's IP in the "Deployment host" field and the VM's Windows IP in the "Receiving host" field (the port was left at the default). But after installing, Forwarder Management in Splunk is not showing any clients. I tried my main IP in both fields, changed my IP from static to dynamic, and tried both the Bridged and NAT network options in the VM. Nothing is working; Splunk is not connecting to the client. I need urgent help to start practicing!! Hoping for the actual solution. Thanks in advance.
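One quick check on the UF side, as a sketch (the path assumes the default Windows install location, and 9997 assumes the default receiving port):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list forward-server

If your Splunk Enterprise host is not listed as an active forward, you can point the UF at it directly:

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" add forward-server <main-desktop-ip>:9997

Note that the "Receiving host" should be the machine running Splunk Enterprise (your main desktop), not the VM itself.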
This default alert does not work:

| rest splunk_server_group=dmc_group_license_master /services/licenser/pools

This does not even return any results. Anyone have ideas?
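A sketch to isolate the problem: the splunk_server_group filter only resolves if the Monitoring Console's server groups are configured, so try the same endpoint locally first:

| rest splunk_server=local /services/licenser/pools

If this returns rows but the dmc_group_license_master version does not, the missing piece is likely the Monitoring Console group mapping (i.e., no instance is assigned the license master role in its setup).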
| makeresults format=csv data="QUE_NAM,FINAL,QUE_DEP
S_FOO,MQ SUCCESS,
S_FOO,CONN FAILED,
S_FOO,MEND FAIL,
S_FOO,,3"
| stats sum(eval(if(FINAL=="MQ SUCCESS", 1, 0))) as good
        sum(eval(if(FINAL=="CONN FAILED", 1, 0))) as error
        sum(eval(if(FINAL=="MEND FAIL", 1, 0))) as warn
        avg(QUE_DEP) as label by QUE_NAM
| rename QUE_NAM as to
| eval from="internal", label="Avg: ".label." Good: ".good." Warn: ".warn." Error: ".error
| append [
    | makeresults format=csv data="queue_name,current_depth
BAR_Q,1
BAZ_R,2"
    | bin _time span=10m
    | stats avg(current_depth) as label by queue_name
    | rename queue_name as to
    | eval from="external", label="Avg: ".label
    | appendpipe [ stats values(to) as from | mvexpand from | eval to="internal" ]]

How can I add different icons for each node in the flow map viz? Please help me with that. Thanks in advance.
Hi @RanjiRaje, don't use real-time scheduling. Ciao. Giuseppe
Hi @jacknguyen, yes, it should be right, what's the problem? Ciao. Giuseppe
Hi @michaelteck, did you try:

[monitor:///data/mft/efs/logs/*/mft_flow.log]
disabled=false
sourcetype=log4j
host=test-aws-lambda-splunk-code
followTail=0
index=test_filtre

? Ciao. Giuseppe
Try something like this (note that if your time range spans midnight, then you will have to do something else with the bin _time):

index=my_index eventStatus=fault
    [| makeresults
     | eval row=mvrange(0,2)
     | mvexpand row
     | addinfo
     | eval earliest=relative_time(info_min_time,(row*-1)."d")
     | eval latest=relative_time(info_max_time,(row*-1)."d")
     | table earliest latest]
| bin _time span=1d
| chart count by eventStatus _time
| foreach 1* [eval diff=if(isnull(diff),'<<FIELD>>',abs((diff-'<<FIELD>>')/diff))]
| where diff > 0.15
If you search both time segments, work out which group each event's time belongs to, then compare the two. See this example:

index=_audit (earliest=-1d@d latest=-1d@d+15m) OR (earliest=@d latest=@d+15m)
| eval group=if(_time>=relative_time(now(),"@d"), "Current", "Prev")
| chart count over user by group
| eval alert=if(Current > Prev * 1.15, 1, 0)

So this sets group according to where _time sits (events at or after today's midnight are "Current", the rest "Prev"), then just charts over user and calculates the excess.
Hi, I have a correlation search created in Enterprise Security, scheduled as below.

Mode: guided
Time range: Earliest: -24h, Latest: Now
Cron: 0 03 * * *
Scheduling: real-time, schedule window: auto, priority: auto
Trigger alert when: greater than 0
Throttling: window duration: 0
Response action: To: mymailid, priority: normal, Include: link to alert, link to results, trigger condition, attach CSV, trigger time

In this case, mail is not getting delivered regularly. If I execute the same SPL query in search, it shows more than 300 rows of results.
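When a search produces results but mail does not arrive regularly, a sketch like this against the scheduler logs can show whether each scheduled run actually executed and fired its actions (the savedsearch_name value is a placeholder for your correlation search's name):

index=_internal sourcetype=scheduler savedsearch_name="<your correlation search name>"
| stats count by status, alert_actions

If runs show status=skipped, the real-time scheduling mode is a likely culprit (see the advice elsewhere in this thread about not using it).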
Hi, did you find any solution for this issue? I am facing the same now.
It sounds like some sort of connection-related issue. A few things to check:

Is there a firewall between your DB Connect server and the HEC server? Ensure the port(s) are available.
Ensure that on the Splunk HEC server you have global settings enabled: click Settings > Data Inputs, click HTTP Event Collector, click Global Settings, and in the All Tokens toggle button, select Enabled.

Some other aspects to check and troubleshoot:

#Check if the HEC collector is healthy
curl -k -X GET -u admin:mypassword https://MY_Splunk_HEC_SERVER:8088/services/collector/health/1.0

#Check if HEC stanzas with config are configured
/opt/splunk/bin/splunk http-event-collector list -uri https://MY_Splunk_HEC_SERVER:8089

#Check the settings using btool
/opt/splunk/bin/splunk cmd btool inputs list --debug http
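As a final end-to-end check, you can send a test event straight to the token; the server name and token value below are placeholders, but /services/collector/event is the standard HEC endpoint:

#Send a test event with your token
curl -k https://MY_Splunk_HEC_SERVER:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec connectivity test"}'

A {"text":"Success","code":0} response means HEC itself is fine and the problem sits on the DB Connect side.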
Hi, put simply, I am trying to wrap my head around how I can configure an alert to trigger if a metric is X% higher or lower than the same metric, say, 1 day ago. So for example, if I search:

index=my_index eventStatus=fault
| stats count by eventStatus

searching "Last 15 minutes" and getting, say, 100 results, can I trigger an alert IF the same search over the same 15-minute timeframe 1 day ago is, for example, 10% higher or lower?

Thanks
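For what it's worth, besides the approaches in the other answers here, timewrap is built for exactly this day-over-day comparison. A sketch, assuming a search window covering the last 15 minutes plus the same window yesterday; the wrapped field names (count_latest_day, count_1day_before) vary by Splunk version, so inspect the timechart output before writing the final eval:

index=my_index eventStatus=fault earliest=-1d-15m@m latest=now
| timechart span=15m count
| timewrap 1d
| eval pct_diff = abs(count_latest_day - count_1day_before) / count_1day_before
| where pct_diff > 0.10

You would typically keep only the latest 15-minute bucket (e.g., with tail 1) before the where clause when turning this into an alert.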
Hello everyone, I'm working to set up many Universal Forwarders to monitor MFT logs. MFT stores all its logs in the directory /data/mft/efs/logs/. In this directory, there are files and subdirectories that we do not want to monitor. The log files that we want to monitor are in subdirectories, and these subdirectories rotate every day. When MFT launches a flow today, for example, it creates a sub-directory: /data/mft/efs/logs/2024-07-02/mft_flow.log. I created an inputs.conf file:

[default]
_meta=env::int-test

[monitor:///data/mft/efs/logs/*]
disabled=false
sourcetype=log4j
host=test-aws-lambda-splunk-code
followTail=0
whitelist=\d{4}-\d{2}-\d{2}\/.*\.log
index=test_filtre

But I don't get anything in my Splunk Enterprise. Can anyone help me?
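Besides the wildcard-path stanza suggested in the reply above, another sketch that may work is monitoring the parent directory and letting whitelist do the filtering, since whitelist is a regex matched against each file's full path; the exact regex below is an assumption based on the dated-directory layout described:

[monitor:///data/mft/efs/logs]
disabled = false
sourcetype = log4j
index = test_filtre
# whitelist is applied to the whole file path, so anchor it on the dated dir and filename
whitelist = \d{4}-\d{2}-\d{2}/mft_flow\.log$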
I use this search:

| dbinspect index=*
| stats sum(rawSize) as total_size by index
| eval total_size_mb = total_size / (1024 * 1024)
| table index total_size_mb

and get this result. Is this right?
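As a cross-check, license usage gives you per-index volume from a different angle; note this measures what was ingested per day, not what currently sits on disk, so it will not match dbinspect exactly, and it needs to search the license manager's _internal data:

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by idx
| eval mb = round(bytes / 1024 / 1024, 2)
| rename idx as index
| table index mb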
Hi @LuísMSB, in the Community you can find thousands of answers to this question! Anyway, you have two choices:

create a lookup containing the perimeter to monitor,
check if a host sent logs in the last 30 days but didn't send in the last hour.

In the first case, you have to create a lookup called perimeter.csv containing at least one column (host); then you can run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If instead you don't want to manage a lookup, you can use this search:

| tstats latest(_time) AS _time count WHERE index=* earliest=-30d@d latest=now BY host
| eval period=if(_time<now()-3600,"previous","latest")
| stats dc(period) AS period_count values(period) AS period BY host
| where period_count=1 AND period="previous"

I prefer the first solution because it gives you more control. Ciao. Giuseppe
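For the first option, perimeter.csv is just a plain CSV with a header row; a minimal example (the host names are placeholders):

host
web01
db01
fw-edge-02

Upload it via Settings > Lookups > Lookup table files, and keep it updated as hosts join or leave the perimeter, since the search only alerts on hosts listed there.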
Hi @jacknguyen, this isn't the dashboard I indicated, because you need the historic license consumption, not the daily one. Anyway, you have a configuration issue on your Monitoring Console; I suggest opening a case with Splunk Support for this, otherwise you cannot solve your request. Ciao. Giuseppe