All Posts
Assuming your timepicker is called timepicker and you want to use sTime to filter your events, try something like this (note the token values need to be quoted so relative_time receives them as strings):

index=summary sourcetype=prod source=service DESCR="Central Extra"
| dedup SI_START,NAME,DESCR
| eval sTime=strptime(SI_START,"%Y-%m-%d %H:%M:%S")
| where relative_time(now(),"$timepicker.earliest$") <= sTime AND relative_time(now(),"$timepicker.latest$") > sTime
The name of the field might come from the log, but the name of the token doesn't have to match; if you can edit the dashboard, you can change the name of the token.
So I have a standalone Splunk instance with only data imported from BOTSv3, and I used these instructions for adjusting Splunk memory in settings:

You can allocate more memory to Splunk by adjusting the settings in the limits.conf file. Locate this file in the Splunk installation directory and modify the max_mem setting to allocate more memory. This file typically resides in SPLUNK/etc/system/local/limits.conf.

I changed max_mem = <new value>MB, raising it from the original 200 MB to 6,144 MB (6 GB) for Splunk to use. So far it seems like I no longer have the bad allocation issue. I will continue monitoring for the error and will update my comment if I run into the bad allocation error again.

This solution may not fit everyone's specific situation: in an organization the memory allocation may already be configured and you may not have permission to change anything. But if you are working with a home lab and making your own configurations as the Splunk admin, this is a good place to start.

Since none of the solutions seem to actually provide steps on how to make the adjustments, I figured I would include some descriptive steps so that people who are learning can follow along and others can contribute their expertise. Please build on the discussion with actionable steps, rather than replying that this solution may not work, so people can actually learn what the solution steps are.
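For clarity, this is the exact change I made. The setting name comes from the instructions I followed, and the stanza placement is my assumption; double-check the limits.conf spec for your Splunk version, since the documented memory cap there may be max_mem_usage_mb instead:

# SPLUNK/etc/system/local/limits.conf
# setting name per the instructions above; stanza is an assumption
# restart Splunk after editing this file
[search]
max_mem = 6144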
Hi All, I have a search query that allows me to pull results from a summary index. One of the fields is a time/date field. The data is pulled from a database and is a schedule, so the time in this field is not the indexed time. I would like to search on the time field, and I have the below query which allows me to do this. However, I would like to move this into a dashboard and have a timepicker. Is this possible? I need one time picker to grab the correct summary index data, and then another for the field.

index=summary sourcetype=prod source=service DESCR="Central Extra"
| dedup SI_START,NAME,DESCR
| eval sTime=strptime(SI_START,"%Y-%m-%d %H:%M:%S")
| sort 0 -sTime
| eval eventday=strptime(SI_START,"%Y-%m-%d %H:%M:%S")
| bucket eventday span=1d
| eval eventday=strftime(eventday,"%Y-%m-%d")
| eval eventday1=strptime(eventday,"%Y-%m-%d")
| eval min_Date=strptime("2023-10-11","%Y-%m-%d")
| eval max_Date=strptime("2023-10-14","%Y-%m-%d")
| where (eventday1 >= min_Date AND eventday1 < max_Date)
| eval record=substr(CODE, -14, 1)
| eval record=case(record==1,"YES", record==0,"NO")
| stats count(eval(record="YES")) as events_record count(record) as events by NAME
| eval percentage_record=(events/events_record)*100
| fillnull value=0 percentage_record
| search percentage_record<100
| sort +percentage_record -events
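For the dashboard side, this is the kind of time input I am hoping to wire up (a minimal Simple XML sketch; the token name, label, and defaults are placeholders), which would expose $timepicker.earliest$ and $timepicker.latest$ to the search:

<input type="time" token="timepicker">
  <label>Schedule time</label>
  <default>
    <earliest>-7d@d</earliest>
    <latest>now</latest>
  </default>
</input>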
There's no OOTB feature; rather, you can add tag/flag values in the search results themselves, and individual team members can then filter based on the flag. Let me know if you have any questions or thoughts.
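For example, a sketch of that approach (the status field and its values here are hypothetical; substitute whatever your events actually contain):

<your base search>
| eval flag=case(status=="open","needs review", status=="closed","done", true(),"unassigned")
| search flag="needs review"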
Hello @ITWhisperer @yuanliu, thank you so much for your help. Is it possible to do it in one stats, instead of two, so I can keep my previous original calculation?

I currently have stats by ip with the following result:

ip       dc(vuln)  dc(vuln) score>0  count(vuln)  sum(score)
1.1.1.1  3         2                 7            23
2.2.2.2  3         1                 4            10

After adding "stats values(score) as score by ip vuln" above the current stats by ip:
- count(vuln) no longer calculates the count of the non-distinct/original vuln values (7 => 3, 4 => 3)
- sum(score) no longer calculates the sum of the non-distinct/original score values (23 => 10, 10 => 5)

ip       dc(vuln)  dc(vuln) score>0  count(vuln)  sum(score)  sum(dc(vuln) score>0)
1.1.1.1  3         2                 *3           *10         10
2.2.2.2  3         1                 *3           *5          5

This is what I would like to have:

ip       dc(vuln)  dc(vuln) score>0  count(vuln)  sum(score)  sum(dc(vuln) score>0)
1.1.1.1  3         2                 7            23          10
2.2.2.2  3         1                 4            10          5
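For reference, the closest I have gotten is a sketch like this, snapshotting the per-ip totals with eventstats before collapsing to one row per ip/vuln (it assumes each vuln has a single score per ip), though I am not sure it is the right approach:

<your base search>
| eventstats count(vuln) as count_vuln, sum(score) as sum_score by ip
| stats max(score) as score, max(count_vuln) as count_vuln, max(sum_score) as sum_score by ip, vuln
| stats dc(vuln) as dc_vuln,
        dc(eval(if(score>0, vuln, null()))) as dc_vuln_scored,
        max(count_vuln) as count_vuln,
        max(sum_score) as sum_score,
        sum(score) as sum_distinct_score
  by ip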
This name already comes from the Okta logs with the dot; unfortunately I won't be able to change it and need to work with what I have. Thank you for the help! I appreciate it!
I tried that but it gives a blank box.  No data. 
Hello, we are trying to work out how much data our Splunk instances search through on average. We've written a search that tells us our platform is running 75-80,000 searches a day; this would be only a few manual searches, with the rest coming from saved/correlation searches. Is there anywhere in the system, or a search we could write, that would say, for instance, that these 75,000 searches searched through a total of 750 GB of data?

We are researching the possibility of moving to a platform that charges per search, so if we can get these figures we can see how much a like-for-like replacement would actually cost.
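The closest we have found so far is the audit data. A sketch of what we are experimenting with (scan_count in the audit events counts events scanned, not bytes, so it is only a proxy for data volume):

index=_audit sourcetype=audittrail action=search info=completed
| stats count as searches, sum(scan_count) as total_events_scanned
| eval avg_events_per_search=round(total_events_scanned/searches,0)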
That lists all User IDs that have over 10 disconnects. I need the total number of users that have disconnected in that time frame; essentially I need to count the number of User IDs that have over 10. Just one number.
$actor.displayName|s$

Having said that, you should probably avoid using dots in names where possible, so perhaps name your token actorDisplayName and use $actorDisplayName|s$
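For example, with the renamed token, the search line would look something like this (the index is a placeholder; actor.displayName is assumed to be the field from your Okta logs):

index=okta actor.displayName=$actorDisplayName|s$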
The stats command can count the number of disconnects for each user. Then filter out users with fewer than ten disconnects.

index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=host2 OR host=Host1) earliest=$time_tok.earliest$ latest=$time_tok.latest$
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds
| stats count by IONS
| where count >= 10
| rename IONS as "User ID"
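If you only want the single total of users at or above the threshold, append one more stats to collapse the list down to one number:

| stats count by IONS
| where count >= 10
| stats count as "Users with 10+ disconnects"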
Hi, I'm trying to utilize the new feature for adding a custom field in the Asset & Identity framework, but I'm getting an error after adding the new field. Thanks for your help!
Thank you for your advice. In this case, if my token name is for example "actor.displayName", do I need to wrap it in the main query like this: $"actor.displayName"|s$ ? Sorry for asking what is probably a very basic question...
Hello everyone, I have a problem with the Splunk add-on "IBM QRadar SOAR Add-on for Splunk". We were able to install the add-on successfully, and when creating a new alert you can also select the alert action. However, the form with the individual fields for QRadar is not displayed for me, although it works for the Splunk team members. According to the Splunk team, the only difference between me and them is that they have administrator rights. Is it correct that the alert action can only be used with administrator rights? Thank you
Hi, if I have understood right, you could/should define the Splunk version to use in the configuration when you are building this up. See: https://splunk.github.io/splunk-operator/SplunkOperatorUpgrade.html. Under "Configuring Operator to watch specific namespace" there is an example where the Splunk Enterprise version is defined. r. Ismo
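For example, a sketch of pinning the version in a custom resource spec (the apiVersion, kind, name, and image tag here are examples only; check the operator docs for your setup, where the version may instead be set via the operator deployment's RELATED_IMAGE_SPLUNK_ENTERPRISE environment variable):

apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: example
spec:
  image: splunk/splunk:9.1.2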
How about adding 'by "User ID"' to the end of the timechart?
Hi, this is not for SOAR, but I have used it on a HF against an O365 mail server to get emails into Splunk. https://github.com/soutamo/TA-mailclient r. Ismo
After speaking to our local Splunk admin, what I am trying to do is not possible, so I decided to break it into two searches: a correlation search and then a drilldown. We're now building a playbook to auto-close the alert if the drilldown finds hits. I was trying to build this alert so it would not hit SOAR, and thus reduce load on our Splunk instance, but that was not possible in this manner.