All Posts


It seems like you're looking to pull browser and mail syslog data into Splunk, but you're facing several problems. To clarify your request: you want to know the correct method for tracking users who have visited specific websites and made changes, correct?
- Recently AppDynamics has joined with Cisco to provide user identity (sign-in credentials) capabilities for all SaaS AppDynamics-based products and services. Users whose passwords are verified by the AppDynamics Identity Platform (not user accounts that sign in using their company's SSO credentials) will be moved to the Cisco Customer Identity platform (id.cisco.com) for verification.
- Everyone needs to follow a few steps from the transition documents to log in to the controller successfully. Follow the instructions in the document below:
- https://community.appdynamics.com/t5/Knowledge-Base/AppDynamics-Identity-is-changing-to-Cisco-Identity/ta-p/53076
Hi @tuts,
let's debug your situation:
- are you sure that the routes between the endpoints and the syslog receiver are open?
- did you configure the syslog receiver as described in the above documentation (inputs)?
- did you disable the local firewall on the Splunk receiver?
For syslog, instead of receiving syslog directly inside Splunk (Splunk Network Inputs), I suggest using an rsyslog receiver that writes the syslog messages to files; then Splunk can read those files.
Ciao.
Giuseppe
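For reference, a minimal sketch of the file-monitor side of that setup, assuming rsyslog is already writing each sender's messages to /var/log/remote/<host>/syslog.log (the path, index name, and sourcetype here are assumptions to adapt to your environment):

# inputs.conf on the Splunk instance that reads the rsyslog files
[monitor:///var/log/remote/*/syslog.log]
disabled = false
# plain syslog lines; swap in a more specific sourcetype if you have one
sourcetype = syslog
# hypothetical index name -- create it in Splunk first
index = network
# take the host name from the 4th path segment (var=1, log=2, remote=3, <host>=4)
host_segment = 4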
Hello,

I have a dashboard and I'd like to add a submit button, because if I change anything the search is launched automatically. I'd like to set everything first, the checkbox and then the input field, and only then launch the search with a submit button. I've tried adding a button, but in that case I'm not able to choose the other checkbox options, only 'Any field'. Could you please help with the modification?

<form version="1.1" theme="light">
  <label>Multiselect Text</label>
  <init>
    <set token="toktext">*</set>
  </init>
  <fieldset submitButton="false">
    <input type="checkbox" token="tokcheck">
      <label>Field</label>
      <choice value="Any field">Any field</choice>
      <choice value="category">Group</choice>
      <choice value="severity">Severity</choice>
      <default>category</default>
      <valueSuffix>=REPLACE</valueSuffix>
      <delimiter> OR </delimiter>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <change>
        <eval token="form.tokcheck">case(mvcount('form.tokcheck')=0,"category",isnotnull(mvfind('form.tokcheck',"Any field")),"Any field",1==1,'form.tokcheck')</eval>
        <eval token="tokcheck">if('form.tokcheck'="Any field","REPLACE",'tokcheck')</eval>
        <eval token="tokfilter">if($form.tokcheck$!="Any field",replace($tokcheck$,"REPLACE","\"".$toktext$."\""),$toktext$)</eval>
      </change>
    </input>
    <input type="text" token="toktext">
      <label>Value</label>
      <default>*</default>
      <change>
        <eval token="tokfilter">if($form.tokcheck$!="Any field",replace($tokcheck$,"REPLACE","\"".$toktext$."\""),$toktext$)</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>$tokfilter$</title>
        <search>
          <query>index=* $tokfilter$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>

Thank you very much in advance!
Thanks for the answer. The saved search has been working just fine since this incident. For testing purposes I created a new index and re-ran the search over the relevant timeframe. It worked fine with the test index. However, when I re-run the search to send the missing events to the real destination index, nothing happens. The search gives results, but these results don't show up in the destination index. I found this log event:

06-21-2024 09:55:08.916 +0200 INFO SavedSearchHistory - pruning saved search history for savedsearch_id=<my_user_name>;vpn;SUM - VPN - Logout events reason=user=<my_user_name> does not exist

It looks as if something happened to my user during this period.
The foreach command goes through each field listed in the foreach command, in this instance, field names beginning with 1 followed by anything. The time values are all epoch times, which are the number of seconds since the beginning of 1970. At present, these all start with 1. Eventually, in about 9 years' time, they will start with 2. So, within the subsearch of the foreach command (within the square brackets []), the <<FIELD>> value in the subsearch is replaced by the field name from the list. Since, in this case, this is a number, <<FIELD>> is placed in single quotes, '<<FIELD>>', to tell Splunk that it is to be interpreted as a field name (not a number).
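As a tiny self-contained illustration of that quoting rule (the epoch field names here are made up):

| makeresults
``` field names starting with a digit must be single-quoted in eval ```
| eval '1718900000'=5, '1718986400'=8
``` foreach 1* visits each matching field; '<<FIELD>>' reads its value as a field, not as the literal number ```
| foreach 1* [ eval total=coalesce(total,0) + '<<FIELD>>' ]
| table total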
That's why my solution uses addinfo, which gives you the "earliest" and "latest" times from the timepicker.
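For anyone following along, a minimal sketch of what addinfo exposes:

| makeresults
``` addinfo adds info_min_time and info_max_time: the epoch bounds of the search time range ```
| addinfo
| eval earliest_readable=strftime(info_min_time, "%F %T"), latest_readable=strftime(info_max_time, "%F %T")
| table info_min_time info_max_time earliest_readable latest_readable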
Thank you, engineer. For me, Sysmon worked and I received endpoint data, but syslog did not work. I want to know which links the user visits and to pull them from network sources.
Hi @michaelteck , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Thanks, this seems to be producing something like what I am looking for. Can I ask what the significance of '<<FIELD>>' is? I don't really understand it. Thanks
Confirm the alerts are firing:

| rest splunk_server=local /servicesNS/-/-/alerts/fired_alerts

Check the _internal index for errors sending alerts to your email provider:

index=_internal "sendemail"

Check with your email provider/admin to see what is happening to the alerts before they get to your mailbox. As always, check your Spam folder.
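A slightly narrower variant of that second search, if the volume is high (ERROR and WARN here are plain keyword filters, not extracted fields):

index=_internal "sendemail" (ERROR OR WARN)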
Since you do not have a Deployment Server (they're optional), leave the "Deployment host" box empty when installing the UF.  Put your Splunk Enterprise IP address in the "Receiving host" box.  In Splunk Enterprise, enable data reception by going to Settings->Forwarding and Receiving and clicking on "Configure receiving".
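Equivalently, in configuration files (9997 is the conventional receiving port; the IP below is a placeholder for your Splunk Enterprise host):

# inputs.conf on the Splunk Enterprise instance -- listen for forwarder traffic
[splunktcp://9997]
disabled = false

# outputs.conf on the Universal Forwarder -- send to the Splunk Enterprise host
[tcpout:default-autolb-group]
server = 192.168.1.10:9997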
Thanks, I tried this but it only seems to list results that occurred between 00:00 and 00:15, despite the search being "15 minutes ago".
I am a noob in SOC and have just started learning; I use Splunk for practice. Recently, I installed the Universal Forwarder on my VMware Windows 10 OS and Splunk Enterprise on my main desktop. During the Universal Forwarder installation, I put my main PC's IP in the Deployment host field and the VM's Windows IP in the Receiving host field (the port was the default). But after installing, Forwarder Management in Splunk is not showing any clients. I tried my main IP in both fields, changed my IP from static to dynamic, and chose both the Bridged and NAT network options in the VM. Nothing is working; Splunk is not connecting to the client. I need urgent help to start practicing!! Hoping for the actual solution. Thanks in advance.
This default alert does not work:

| rest splunk_server_group=dmc_group_license_master /services/licenser/pools

This does not even return any results. Does anyone have ideas?
| makeresults format=csv data="QUE_NAM,FINAL,QUE_DEP
S_FOO,MQ SUCCESS,
S_FOO,CONN FAILED,
S_FOO,MEND FAIL,
S_FOO,,3"
| stats sum(eval(if(FINAL=="MQ SUCCESS", 1, 0))) as good
        sum(eval(if(FINAL=="CONN FAILED", 1, 0))) as error
        sum(eval(if(FINAL=="MEND FAIL", 1, 0))) as warn
        avg(QUE_DEP) as label
        by QUE_NAM
| rename QUE_NAM as to
| eval from="internal", label="Avg: ".label." Good: ".good." Warn: ".warn." Error: ".error
| append
    [| makeresults format=csv data="queue_name,current_depth
BAR_Q,1
BAZ_R,2"
    | bin _time span=10m
    | stats avg(current_depth) as label by queue_name
    | rename queue_name as to
    | eval from="external", label="Avg: ".label
    | appendpipe
        [ stats values(to) as from
        | mvexpand from
        | eval to="internal" ]]

How do I add different icons for each node in the flow map viz? Please help me with that. Thanks in advance.
Hi @RanjiRaje,
don't use real-time scheduling.
Ciao.
Giuseppe
Hi @jacknguyen, yes, it should be right, what's the problem? Ciao. Giuseppe
Hi @michaelteck,
did you try:

[monitor:///data/mft/efs/logs/*/mft_flow.log]
disabled = false
sourcetype = log4j
host = test-aws-lambda-splunk-code
followTail = 0
index = test_filtre

?
Ciao.
Giuseppe
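If the stanza still doesn't pick the file up, one way to confirm what Splunk actually loaded (this assumes CLI access on the forwarder) is:

# show the effective monitor stanzas and which .conf file each setting comes from
splunk btool inputs list monitor --debug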
Try something like this (note that if your time range spans midnight, then you will have to do something else with the bin _time):

index=my_index eventStatus=fault
    [| makeresults
    | eval row=mvrange(0,2)
    | mvexpand row
    | addinfo
    | eval earliest=relative_time(info_min_time,(row*-1)."d")
    | eval latest=relative_time(info_max_time,(row*-1)."d")
    | table earliest latest]
| bin _time span=1d
| chart count by eventStatus _time
| foreach 1* [eval diff=if(isnull(diff),'<<FIELD>>',abs((diff-'<<FIELD>>')/diff))]
| where diff > 0.15