All Posts


Hi All, I'm trying to create a dashboard in Dashboard Studio with dynamic coloring elements for single value searches (i.e. traffic lights). When I go to the Coloring > Dynamic Elements dropdown to select Background, it will not select - a click just makes it disappear and the dropdown shows "None". The search works fine and displays the correct value. Running Enterprise on-prem version 9.0.0 build 6818ac46f2ec. I have not found anything on the net or here to suggest this version has an issue. Looking to update to 9.1.1, but as this is production that is a planned exercise. The search is index=SEPM "virus found" | stats count(message) as "Infected hosts" and I have this traffic light working in the normal dashboard build - I'm just trying Studio to see how it is. Any help appreciated!
Hi Team, At present, SSL encryption is enabled between the Universal Forwarder (UF) and the Heavy Forwarder (HF), while communication from HF to Indexers occurs without SSL encryption. However, there are plans to establish an SSL channel between the HF and Indexers in the future. Additionally, communication between Indexers and the License Master, as well as between HF and the License Master, currently operates through non-SSL channels. There is a requirement to transition these communications to SSL-enabled connections. Could you provide guidance or documentation outlining the necessary implementation steps for securing the communication from Indexers & HF to License Master to facilitate these changes?
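For reference, communication with the License Master happens over the splunkd management port, which is governed by the [sslConfig] stanza in server.conf on each instance. The following is only a rough, hedged sketch - the certificate paths and password are placeholders and your certificate strategy may differ - of the settings typically involved on the Indexers, HF and License Master; the Securing Splunk Enterprise manual has the full procedure:

[sslConfig]
# splunkd-to-splunkd (management port) TLS; on by default with Splunk's self-signed certs
enableSplunkdSSL = true
# replace the default certificates with your own (placeholder paths)
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = <your_cert_password>
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCACert.pem
# have clients verify the server certificate when connecting
sslVerifyServerCert = true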
Hi, thanks for your response. My desired results:

Item  StartTime  EndTime   Duration
A     02:05:00   02:15:00  00:10:00
B     02:15:05   02:25:05  00:10:00
B     02:40:05   02:45:05  00:05:00

I had tried methods similar to yours but got wrong results. The fail duration should be calculated from the first fail to the first success, so the actual record count should be 3 instead of 5. Sorting may not be the root cause of my question. It seems that if there are 2 "fail" events, the "transaction" command generates 2 overlapping records:

B     02:20:05 (fail)   02:25:05 (success)  00:05:00
B     02:15:05 (fail)   02:30:05 (success)  00:15:00

The time duration of the first one is included in the second one. 02:30:05 (success) should not be considered the end of the fail event; 02:25:05 (success) is the correct one.
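One transaction-free way to get first-fail-to-first-success durations is to number the completed fail/success cycles with streamstats and then take the earliest fail and earliest success within each cycle. This is only a sketch against assumed field names (a status field containing "fail"/"success", plus the Item field from the table above); adjust it to the actual events:

| sort 0 Item _time
``` count how many successes have already closed earlier cycles for this Item ```
| streamstats current=f count(eval(status="success")) as cycle by Item
``` first fail and first success within each cycle ```
| stats min(eval(if(status="fail", _time, null()))) as StartTime min(eval(if(status="success", _time, null()))) as EndTime by Item cycle
| where isnotnull(StartTime) AND isnotnull(EndTime) AND EndTime >= StartTime
| eval Duration = tostring(EndTime - StartTime, "duration")
| fieldformat StartTime = strftime(StartTime, "%H:%M:%S")
| fieldformat EndTime = strftime(EndTime, "%H:%M:%S")
| fields Item StartTime EndTime Duration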
I want to point out that these two warnings are breaking my jobs, because on some machines I am using the splunkforwarder CLI to run queries on the Splunk cluster and export the results to files: https://docs.splunk.com/Documentation/Splunk/9.1.1/Search/ExportdatausingCLI

These two extra warning lines are now written to the export files as well. I think it is OK for the CLI to print warnings, but the Splunk CLI should follow best practice and write these warnings to stderr. It is writing them to stdout instead, so we can't use the standard "2> err.txt 1> export.csv" redirection to handle warnings. Now I have to add this to ALL the script files which run the splunkforwarder CLI, which is pretty ugly: | grep -vi "warning:" > export.csv

I wish there were a flag to disable warnings, or the splunkforwarder CLI should at least write them to stderr instead of stdout.
Hi @cpuffe ... this can get mixed views depending on the person's Linux interests. You can decide based on your budget and your support team members' Linux preferences. RHEL is a good choice too. For the full list of supported operating systems, you can check this documentation, thanks: https://docs.splunk.com/Documentation/Splunk/9.1.1/Installation/Systemrequirements
If you use transaction (which I advise against) you need to correlate with the session id - as you can see in your rows 2 and 3, the session id ending in 93 is out of sync with your rows. Generally the way to find these things is to use something like

search....
| stats min(_time) as min max(_time) as max values(*) as * by cs_sessionid

and in the stats, collect the values you want (instead of values(*) as *). This way you won't hit the limitations of transaction, which silently breaks your results with large data sets.
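For example, a sketch of that stats-based approach with explicit fields (the sc_action / c_ip / s_computername names are taken from the searches in this thread - adjust as needed):

index=imperva sourcetype=imperva:waf (sc_action="REQ_CHALLENGE_CAPTCHA" OR sc_action="REQ_PASSED")
| stats min(_time) as start_time max(_time) as end_time values(sc_action) as actions values(c_ip) as client_ip values(s_computername) as web_server by cs_sessionid
``` keep only sessions that saw both a captcha challenge and a pass ```
| search actions="REQ_CHALLENGE_CAPTCHA" actions="REQ_PASSED"
| eval duration = end_time - start_time
| fieldformat start_time = strftime(start_time, "%Y-%m-%d %H:%M:%S")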
Although you cannot technically represent those dates pre-1970 as the internal _time field, you CAN use negative epoch times, as strftime will work and correctly format the negative epochs as the correct time; but, as you found, you cannot parse dates pre-1970, so you'd have to create your epochs through calculation. Splunk is not great with non-_time values on the X-axis of timecharts, but you can chart over a string, where the string could be YYYY-MM and it will render it correctly - you just don't get the dates on the x-axis. You can get the idea from this

| makeresults count=3000
| streamstats c
| eval _time=now() - (c * (86400 * 30))
| eval month=strftime(_time, "%Y-%m")
| eval r=random() % 100
| chart avg(r) as r over month
Hi @gayathrc ... Please check this "Getting Data In" Splunk document; it gives the steps for monitoring a network input (TCP / UDP): https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/Monitornetworkports   Upvotes / karma points appreciated, thanks.
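As a minimal illustrative sketch only (the port, sourcetype and index below are made-up placeholders - the documentation linked above is the authoritative reference), a network input is defined in inputs.conf with a stanza like:

[tcp://9514]
sourcetype = my_network_sourcetype
index = my_network_index
connection_host = ip

[udp://9514]
sourcetype = my_network_sourcetype
index = my_network_index

The same input can also be created from Settings > Data inputs in Splunk Web.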
I may not totally understand how Imperva identifies unique events. This query shows a lot of confusing results; it seems for every event our main site also gets a cs_sessionid, which I was led to believe was a unique identifier. As you can see in the screenshot, the results are kinda skewed.

index=imperva sourcetype=imperva:waf (sc_action="REQ_CHALLENGE_CAPTCHA" OR sc_action="REQ_PASSED") s_computername=*
| transaction maxspan=1m startswith="sc_action=REQ_CHALLENGE_CAPTCHA" endswith="sc_action=REQ_PASSED"
| where sc_action="REQ_PASSED" OR sc_action="REQ_CHALLENGE_CAPTCHA"
| eval human_readable_time=strftime(min(_time),"%Y-%m-%d %H:%M:%S")
| mvexpand human_readable_time
| table human_readable_time, s_computername, sc_action, c_ip, cs_sessionid
| rename human_readable_time AS Date/Time, s_computername AS "Web Server", sc_action AS "Request Response", cs_sessionid AS "Client Session ID", c_ip AS "client IP"
| bin _time span=1mon | stats values(CN) as CN by _time | streamstats dc(CN) as unique | streamstats latest(unique) as previous current=f | fillnull value=0 previous | eval new=unique-previous
There are a number of ways to do this, but a simple approach is to do something like this

search earliest=-2mon@mon latest=@mon
| bin _time span=1mon
| stats count by _time CN
| stats dc(_time) as times values(_time) as _time by CN
| eventstats dc(eval(if(times=1 AND _time=relative_time(now(), "-1mon@mon"), CN, null()))) as "New" dc(eval(if(times=1 AND _time=relative_time(now(), "-2mon@mon"), CN, null()))) as "Old" dc(eval(if(times=2, CN, null()))) as "Returning"

but this will never class the first month's users as new; it only compares last month with the previous month, i.e. in this case October vs September - you can change the times to do October and partial November.

If you want a different approach, you can keep a lookup of users who are "known", then simply look at the current month and look each user up against the lookup. If they do not exist, they are new. You will also have to roll over the 'new' users for this month into the lookup at the end of the month, as in the sketch below.
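A rough sketch of that lookup-based approach, assuming a hypothetical lookup file called known_users.csv with a single CN column (the lookup name and the leading "search" placeholder are assumptions):

``` users seen this month, flagged against the lookup ```
search earliest=@mon latest=now
| stats count by CN
| lookup known_users.csv CN OUTPUT CN as known
| eval status=if(isnull(known), "New", "Returning")
| stats count by status

``` at the end of the month, roll this month's users into the lookup ```
search earliest=@mon latest=now
| stats count by CN
| fields CN
| inputlookup append=t known_users.csv
| dedup CN
| outputlookup known_users.csv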
Essentially, you can't represent dates prior to 1970 as a timestamp. However, you could convert your dates to an integer, e.g. 1752-09-03 becomes 17,520,903 (except that particular date didn't exist!), and 2023-11-13 becomes 20,231,113, etc. Obviously, this doesn't work if you want to use times as well, and you shouldn't save these in _time as that might be treated as an epoch time, i.e. seconds since 1970-01-01.
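For example, a sketch of that conversion using the dt field from the search in this thread:

| eval date_int = tonumber(replace(dt, "-", ""))
``` e.g. "1752-09-03" -> 17520903, "2023-11-13" -> 20231113 ```

The resulting integer sorts chronologically, so it can be used for ordering or as a chart-over field.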
My dataset has historical monthly average temperatures for the years 1745 to 2013. Since my source is a csv file, I used the following so that the _time field represents the timestamp in each event:

source="Global warming trends.zip:*" source="Global warming trends.zip:./GlobalLandTemperaturesByMajorCity.csv" Country=Canada City=Montreal dt=*-01-* AverageTemperature="*"
| eval _time=strptime(dt,"%Y-%m-%d")

However, all the events dated 1970 and prior don't have their timestamp in the 'Time' column, as per the attached capture. I suspect this has to do with epoch time, but how do I fix this so I can visualize my entire data set in a line chart?
Your eval is wrong - you don't need IN

search...
| eval activity=case(sc_action="REQ_CHALLENGE_CAPTCHA", "captcha", sc_action="REQ_PASSED", "passed", true(), sc_action)
| stats count by activity

but that will just give you counts of each - are you looking to relate that to a user or IP, and should one event follow the other? If so, that's not enough.
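For instance, if the aim is to count sessions that saw both a captcha challenge and a pass, one sketch (field names taken from the searches elsewhere in this thread; note it does not enforce the order of the two events) would be:

index=imperva sourcetype=imperva:waf (sc_action="REQ_CHALLENGE_CAPTCHA" OR sc_action="REQ_PASSED")
| stats count(eval(sc_action="REQ_CHALLENGE_CAPTCHA")) as captcha count(eval(sc_action="REQ_PASSED")) as passed by cs_sessionid, c_ip
``` sessions that saw both a challenge and a pass ```
| where captcha > 0 AND passed > 0
| stats count as sessions_with_captcha_and_pass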
Change the search to add in the user and destination so they are captured, e.g.

| tstats summariesonly=t allow_old_summaries=t count values(Authentication.dest) as dest values(Authentication.user) as user from datamodel=Authentication by Authentication.action, Authentication.src
| rename Authentication.* as *
| chart last(count) values(dest) as dests values(user) as users over src by action

i.e. change the first 3 lines to add in the values - note also the wildcard rename. You can add more fields from the Authentication datamodel if you need more information.
You can approach it with something like this

(index=1 sourcetype="abc" ("s1 event received" OR "s2 event received" OR "s3 event received")) OR (index=2 sourcetype="xyz" "created")
| rex "(?<e_type>s.) event received for (?<customer>\d+)"
| rex "(?<created>created) for (?<customer>\d+)"
| stats max(eval(if(e_type="s3",_time, null()))) as last_e_type max(eval(if(created="created", _time, null()))) as created_time dc(e_type) as e_types values(created) as created by customer
| addinfo
| where e_types=3 AND (created_time-last_e_type > 300 OR (isnull(created_time) AND info_max_time - last_e_type > 300))

So you search for both data sets and extract the customer using rex from the event types. It also extracts the event type (s1/s2/s3) into e_type. It then calculates the number of e_types and the relevant times, i.e. last_e_type is the time of the s3 event and created_time is the time of the created event. The final where clause requires all 3 sX events to have been received, and it will drop through if the created time is more than 5 minutes after the s3 event or if no created event has been seen at all. Note that your time window for your search should allow for s1/s2/s3 AND created to be in the same dataset, because if one run of the search only sees s1 and nothing else, and the next run sees s2 and s3 and no created, it will not alert. So maybe the search should be set to run every 5 minutes and look at a 10 minute window, e.g. from -10m@m to -5m@m.
We had a vendor set up a Splunk instance for us a while ago, and one of the things they did was set up a Brute Force attack alert using the following search:

| tstats summariesonly=t allow_old_summaries=t count from datamodel=Authentication by Authentication.action, Authentication.src
| rename Authentication.src as source, Authentication.action as action
| chart last(count) over source by action
| where success>0 and failure>20
| sort -failure
| rename failure as failures
| fields - success, unknown

Now this seems to work OK, as I'm getting regular alerts, but these alerts contain little if any detail. Sometimes they contain a server name, so I've checked that server. I can see some failed login attempts on that server, but again, no detail. No account details, no IPs, no server names. It may be some sort of scheduled task, as I get an alert from Splunk every hour and every time it has about the same number of Brute Force attacks (24). But I can't see any scheduled tasks that may cause this. Does anyone have any suggestions on how to track down what is causing these false alerts?
I'm trying to get specific results if two values in the same field are true, but I keep failing. I want to count the number of times sc_action=REQ_PASSED when sc_action=REQ_CHALLENGE_CAPTCHA was required.

I tried this:

My search
| eval activity=if(IN(sc_action, "REQ_CHALLENGE_CAPTCHA", "REQ_PASSED")"passed","captcha")
| stats count by activity

I tried if/where and evals; I either get an error or I get all the results where both are true. Maybe I'm overthinking it.
This worked for me when I was testing on a personal Windows laptop, but the official system I use is from 2015 running Windows 10 Pro, which is much older. I had to download an older 7.2.10 version of the Splunk Universal Forwarder for it to even install. The logs are being forwarded, but when I add the index line, nothing changes and the search for that index comes up empty. Could this be due to using an older universal forwarder version? Is there a different way to assign an index?
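For reference, the index is normally assigned per input stanza in inputs.conf on the forwarder. A hedged sketch only, with a made-up path and names (the index must also already exist on the receiving Splunk instance, otherwise the events may be discarded):

[monitor://C:\MyApp\logs]
index = my_index
sourcetype = my_sourcetype
disabled = 0

After editing inputs.conf, restart the forwarder so the change takes effect.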
What is your data that contains the info needed to produce this report and what have you tried so far?