Hi @richa, Since you asked for alerting on data sources that have stopped for more than 24 hours, the search will not show yesterday's logs. You can change the delay threshold according to your needs; 86400 seconds is equivalent to 24 hours.
Hi @uagraw01, If there are too many files in that folder, you can try adding the "ignoreOlderThan" setting to your monitor stanza:
[monitor://\\WALVAU-SCADA-1\d$\CM\alarmreports\outgoing*]
disabled = false
index = scada
host = WALVAU-SCADA-1
sourcetype = cm_scada_xml
ignoreOlderThan = 24h
Hi @karthi2809, Since there is no _raw data after the transaction command, you cannot run free-text searches. You should search on a specific field instead, for example:
| search NOT message="*Failed Processing Concur*"
Hi @Mrig342, You can use the eval function below:
| eval Used_Space=case(match(Used_Space,"M"),round(tonumber(replace(Used_Space,"M",""))/1024,2)."G",1=1,Used_Space)
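To illustrate the conversion that eval performs, here is a minimal Python sketch of the same logic (the function name and sample values are invented for illustration; the SPL above is what you would actually run):

```python
def normalize_space(value: str) -> str:
    """Convert an 'M'-suffixed size to gigabytes; leave other values unchanged.

    Mirrors: case(match(Used_Space,"M"), round(.../1024, 2)."G", 1=1, Used_Space)
    """
    if "M" in value:
        megabytes = float(value.replace("M", ""))
        return f"{round(megabytes / 1024, 2)}G"
    return value

print(normalize_space("512M"))   # 0.5G
print(normalize_space("2.5G"))   # already in G, unchanged
```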
Hi @bgill0123, You should run your search over a time range of 15 days or more to have values to compare, and use the "delta" command like below (assuming your 5-day average hit count field is "five_days_avg_count"):
| delta five_days_avg_count as diff
| eval perc_diff=abs(diff*100/five_days_avg_count)
| search perc_diff > 10
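As a sketch of what the delta/perc_diff pipeline computes, assuming the same field semantics (the averages below are invented sample values):

```python
def percent_change(prev_avg: float, curr_avg: float) -> float:
    """Absolute percent difference between consecutive 5-day averages,
    mirroring: eval perc_diff=abs(diff*100/five_days_avg_count)."""
    diff = curr_avg - prev_avg           # what | delta produces
    return abs(diff * 100 / curr_avg)    # the SPL divides by the current value

# Flag consecutive windows whose average moved by more than 10%
averages = [100.0, 95.0, 120.0]
flagged = [
    (prev, curr)
    for prev, curr in zip(averages, averages[1:])
    if percent_change(prev, curr) > 10
]
print(flagged)  # [(95.0, 120.0)]
```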
Hi @Skeer-Jamf, This is outside the Splunk context, but you can look at the Linux Keepalived service for redundancy. Keepalived supports an active/passive failover mode, and a load-balancing setup is also possible. It creates and manages a virtual IP address that forwards incoming traffic to healthy backend servers.
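A minimal keepalived.conf sketch of the active/passive VRRP setup described above; the interface name, router ID, priority, and virtual IP are placeholders you would adapt to your environment:

```
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the passive node
    interface eth0          # placeholder interface name
    virtual_router_id 51
    priority 100            # set a lower priority on the passive node
    advert_int 1
    virtual_ipaddress {
        192.0.2.10          # the virtual IP clients connect to
    }
}
```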
Hi @iamsplunker0415, You can use the "date_hour" field for filtering hours; please try the sample below:
index=your_index USA="Washington" NOT date_hour IN (2,3)
You can see the index of each source by using the query below:
| tstats latest(_indextime) as latest where index IN (index1,index2,index3,index4) earliest=-48h by source index
| eval delay = now() -latest
| where delay > 86400
| eval delay=tostring(delay, "duration")
| fields - latest
The query above checks events ingested within the last 48 hours and keeps only the sources that have not sent data for at least 24 hours. Looking 48 hours back makes sure that sources which update daily are still taken into account.
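The where-clause threshold can be sketched in Python like this (the source names and timestamps are invented; in Splunk, the tstats search does this work):

```python
import time

DAY = 86400  # 24 hours in seconds, as in: | where delay > 86400

def stale_sources(latest_by_source: dict, now: float) -> list:
    """Return sources whose newest ingested event is older than 24 hours."""
    return [src for src, latest in latest_by_source.items() if now - latest > DAY]

now = time.time()
latest = {
    "app.log": now - 3600,                # sent an hour ago: healthy
    "daily_export.csv": now - 30 * 3600,  # silent for 30 hours: stale
}
print(stale_sources(latest, now))  # ['daily_export.csv']
```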
Hi @richa, You can use the query below after replacing the 4 indexes with yours; it checks all sources that sent data within the last 48 hours but have been silent for more than 24 hours.
| tstats latest(_indextime) as latest where index IN (index1,index2,index3,index4) earliest=-48h by source
| eval delay = now() -latest
| where delay > 86400
| eval delay=tostring(delay, "duration")
| fields - latest
Hi @saskn, If the query works with Operation!="Disable Strong Authentication.", it means no user has disabled MFA. Normally, you get no results if all users are using MFA.
Hi @aniketsamudra, You should use a case statement like the one below:
| eval Test=case(like('thrown.extendedStackTrace',"%403%"),"403", like('thrown.extendedStackTrace',"%404%"),"404",1=1,"###ERROR####")
Hi @smanojkumar, Since you are passing arguments with a comma delimiter, it seems they do not match the macro definition. The solution depends on your macro's search definition: if you update the macro definition to use OR, you can pass multiple values delimited by " OR " (with spaces), like below:
<input type="multiselect" token="machine" searchWhenChanged="true">
<label>Machine type</label>
<choice value="*">All</choice>
<choice value="VDI">VDI</choice>
<choice value="Industrial">Industrial</choice>
<choice value="Standard">Standard</choice>
<choice value="MacOS">MacOS</choice>
<choice value="**">DMZ</choice>
<default>*</default>
<initialValue>*</initialValue>
<prefix> (</prefix>
<suffix> )</suffix>
<delimiter> OR </delimiter>
<change>
<condition match='$label$ == "*DMZ*"'>
<set token="machine_type_dmz">"mcafee_DMZ=DMZ"</set>
</condition>
<condition match='$label$ != "*DMZ*"'>
<unset token="machine_type_dmz"></unset>
</condition>
</change>
</input>
Hi @Niro, If your issue isn't resolved, it might be caused by a sourcetype overwrite on the PAN logs. pan:traffic is an overridden sourcetype; please try applying the transforms setting to your original sourcetype. It should be pan:log or pan_log, depending on your input settings.
[pan:log]
TRANSFORMS-pan_user = pan_src_user
Hi @Stives, This happens when Java is not installed or the Java path is not configured correctly. Did you try restarting the Splunk service? Sometimes that helps. Alternatively, your Java installation may have changed because of an OS update, etc.
Hi @adrojis, Did you run set_permissions.sh on your forwarder? You need to run it manually on the UF host:
cd $SPLUNK_HOME/etc/apps/Splunk_TA_stream
sudo chmod +x ./set_permissions.sh
sudo ./set_permissions.sh
See the documentation page "Install Splunk Add-on for Stream Forwarder" for details.
Hi @Chirag812, Splunk manages retention on a per-bucket basis. This means that for a bucket to be frozen, the newest data in that bucket must be older than frozenTimePeriodInSecs. Normally, all data in a bucket have close timestamps. But if some of your sources send data with old timestamps, that data goes into the same bucket as data with recent timestamps, which makes the bucket's oldest timestamp much older than the rest. That is why you see the situation above. Unfortunately, there is no way to fix this until the newest data in the bucket is older than frozenTimePeriodInSecs. To prevent this behavior in the future, check your data sources for the problems below:
- Always use healthy NTP servers for all your data sources to make sure they have correct timestamps.
- Check for timestamp extraction problems and use the TIME_PREFIX and TIME_FORMAT settings to prevent Splunk from taking the wrong part of the log as a timestamp. If there are epoch-like patterns in your data, Splunk could use one of them as the timestamp.
You can use the query below to find the wrongly timestamped events to fix:
index=ABC earliest=1 latest=-63d
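The freezing rule can be sketched as a simple check (a simplified model for illustration; real bucket management is internal to splunkd, and the retention value here is just an example):

```python
FROZEN_TIME_PERIOD_IN_SECS = 90 * 86400  # e.g. a 90-day retention policy

def bucket_can_freeze(newest_event_time: float, now: float) -> bool:
    """A bucket freezes only when its NEWEST event exceeds retention;
    a few old-timestamped events mixed into a recent bucket keep the
    whole bucket (including the old events) alive."""
    return now - newest_event_time > FROZEN_TIME_PERIOD_IN_SECS

now = 1_700_000_000.0
# Bucket holding one stray event from years ago plus fresh events:
newest = now - 3600  # the newest event is only an hour old
print(bucket_can_freeze(newest, now))  # False: nothing in the bucket freezes
```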
Hi @CarolinaHB, The event must be a valid JSON string on its own to be parsed as JSON. If you can send the log with the leading "Feb 5 18:50:30 10.0.30.81" prefix removed, it should be displayed as JSON.
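To verify the idea outside Splunk, here is a small Python sketch that strips the syslog-style prefix and parses the remainder (the JSON payload is invented for illustration; only the prefix comes from the sample above):

```python
import json
import re

raw = 'Feb  5 18:50:30 10.0.30.81 {"event": "login", "user": "carolina"}'

# Drop everything before the first '{' (the syslog timestamp/host prefix)
payload = re.sub(r'^[^{]*', '', raw)
parsed = json.loads(payload)  # succeeds once the prefix is removed
print(parsed["event"])        # login
```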
Hi @oussama1, I think your problem is caused by the regex output. If some of your flags have no associated values in the regex output, it is not possible to match flags to their values. You may need to change your regex to produce output that SPL can parse. If you can share an anonymized sample event, we can try to help.
Hi @Haleem, Please try the search below:
index=xxxx source=*xxxxxx*
| stats avg(responseTime), max(responseTime), count(eval(respStatus >=500)) as "ERRORS", count(eval(respStatus >=400 AND respStatus <500)) as "EXCEPTIONS", count(eval(respStatus >=200 AND respStatus <400)) as "SUCCESS" by client_id servicePath
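The three count(eval(...)) buckets in the stats command correspond to status-code ranges like these (a Python sketch with invented status values):

```python
def classify(resp_status: int) -> str:
    """Mirror the eval conditions: >=500 ERRORS, 400-499 EXCEPTIONS, 200-399 SUCCESS."""
    if resp_status >= 500:
        return "ERRORS"
    if resp_status >= 400:
        return "EXCEPTIONS"
    if resp_status >= 200:
        return "SUCCESS"
    return "OTHER"

statuses = [200, 302, 404, 500, 503]
counts = {}
for s in statuses:
    counts[classify(s)] = counts.get(classify(s), 0) + 1
print(counts)  # {'SUCCESS': 2, 'EXCEPTIONS': 1, 'ERRORS': 2}
```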
Hi @oussama1, You can add the fillnull command after your rex command:
| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"
| fillnull value="true" flag value
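To show what fillnull accomplishes for flags that carry no value, here is a simplified Python tokenizer (this is not the rex above, just an illustration; the default of "true" mirrors the fillnull value):

```python
def parse_flags(cmd: str) -> dict:
    """Pair each -flag/--flag with its value; flags without a value default
    to 'true', which is what | fillnull value="true" achieves in SPL."""
    tokens = cmd.split()
    flags = {}
    i = 0
    while i < len(tokens):
        if tokens[i].startswith("-"):
            nxt = tokens[i + 1] if i + 1 < len(tokens) else None
            if nxt is not None and not nxt.startswith("-"):
                flags[tokens[i]] = nxt  # flag followed by a value
                i += 2
                continue
            flags[tokens[i]] = "true"   # bare flag: fill with "true"
        i += 1
    return flags

print(parse_flags("--verbose --mode fast -x"))
# {'--verbose': 'true', '--mode': 'fast', '-x': 'true'}
```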