Hello @ITWhisperer @yuanliu , thank you so much for your help! Is it possible to do it in one stats, instead of two, so I can keep my previous original calculation? I currently have a stats by ip with the following result:

| ip      | dc(vuln) | dc(vuln) score > 0 | count(vuln) | sum(score) |
|---------|----------|--------------------|-------------|------------|
| 1.1.1.1 | 3        | 2                  | 7           | 23         |
| 2.2.2.2 | 3        | 1                  | 4           | 10         |

After adding "stats values(score) as score by ip vuln" above the current stats by ip, count(vuln) no longer counts the original (non-distinct) vuln values (7 => 3, 4 => 3), and sum(score) no longer sums the original (non-distinct) scores (23 => 10, 10 => 5):

| ip      | dc(vuln) | dc(vuln) score > 0 | count(vuln) | sum(score) | sum (dc(vuln) score > 0) |
|---------|----------|--------------------|-------------|------------|--------------------------|
| 1.1.1.1 | 3        | 2                  | *3          | *10        | 10                       |
| 2.2.2.2 | 3        | 1                  | *3          | *5         | 5                        |

This is what I would like to have:

| ip      | dc(vuln) | dc(vuln) score > 0 | count(vuln) | sum(score) | sum (dc(vuln) score > 0) |
|---------|----------|--------------------|-------------|------------|--------------------------|
| 1.1.1.1 | 3        | 2                  | 7           | 23         | 10                       |
| 2.2.2.2 | 3        | 1                  | 4           | 10         | 5                        |
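One possible shape for this (a sketch, not a tested answer: it assumes each vuln carries a single score value, so max(score) recovers it after the first stats; field names such as raw_count and raw_score_sum are illustrative) is to carry the raw count and sum through the first stats so the second stats can still report them:

```
| stats count as raw_count, sum(score) as raw_score_sum, max(score) as score by ip, vuln
| stats dc(vuln) as "dc(vuln)",
    dc(eval(if(score > 0, vuln, null()))) as "dc(vuln) score > 0",
    sum(raw_count) as "count(vuln)",
    sum(raw_score_sum) as "sum(score)",
    sum(eval(if(score > 0, score, 0))) as "sum (dc(vuln) score > 0)"
    by ip
```

The first stats collapses to one row per ip/vuln pair but preserves the original event count and score total as ordinary fields, so the second stats can both deduplicate (dc, sum of per-vuln scores) and report the raw aggregates.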
This name already comes from the Okta logs with a dot; unfortunately, I won't be able to change it. I need to work with what I have. Thank you for your help! I appreciate it!
Hello, we are trying to work out how much data our Splunk instances search through on average. We've written a search that tells us our platform runs 75,000-80,000 searches a day; only a few of these are manual searches, with the rest coming from saved/correlation searches. Is there anywhere in the system, or a search we can write, that would tell us, for instance, that these 75,000 searches searched through a total of 750 GB of data? We are researching the possibility of moving to a platform that charges per search, so with these figures we could see how much a like-for-like replacement would actually cost.
That lists all User IDs that have over 10 disconnects. I need the total number of users that have disconnected in that time frame. I essentially need to add up the number of User IDs that have over 10. Just one number.
$actor.displayName|s$ Having said that, you should probably avoid using dot in names where possible, so perhaps name your token as actorDisplayName and use $actorDisplayName|s$
The stats command can count the number of disconnects for each user. Then filter out users with fewer than ten disconnects. index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=host2 OR host=Host1) earliest=$time_tok.earliest$ latest=$time_tok.latest$
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds
| stats count by IONS
| where count >= 10
| rename IONS as "User ID"
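To collapse that per-user table into the single number asked for, one more stats can be appended at the end (a sketch; the output field name is illustrative). Each remaining row is one qualifying user, so a plain count of rows gives the answer:

```
| stats count by IONS
| where count >= 10
| stats count as "Users with 10+ disconnects"
```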
Hi, I'm trying to utilize the new feature of adding a custom field in the Asset & Identity framework, but I'm getting an error after adding the new field. Thanks for your help!
Thank you for your advice. If my token name is, for example, "actor.displayName", do I need to wrap it like this in the main query: $"actor.displayName"|s$ ? Sorry for asking what is probably a very basic question...
Hello everyone, I have a problem with the Splunk add-on "IBM QRadar SOAR Add-on for Splunk". We were able to install the add-on successfully, and when creating a new alert you can also select the alert action. However, the form with the individual fields for QRadar is not displayed for me, although it works for the Splunk team members. According to the Splunk team, the only difference between me and them is that they have administrator rights. Is it correct that the alert action can only be used with administrator rights? Thank you
Hi, if I have understood right, you could/should define the Splunk version used in the configuration when you are building this up. See: https://splunk.github.io/splunk-operator/SplunkOperatorUpgrade.html Under "Configuring Operator to watch specific namespace" there is an example where the Splunk Enterprise version has been defined. r. Ismo
After speaking to our local Splunk admin, what I am trying to do is not possible. So I decided to break it into 2 searches: 1 correlation search and then a drill-down. We're then building a playbook to auto-close the alert if the drill-down finds hits. I was trying to build this alert so it would not hit SOAR, and thus reduce load on our Splunk instance, but that was not possible in this manner.
Hi, one way is to move this to transforms.conf and use MV_ADD = 1, as in e.g. this: https://community.splunk.com/t5/Splunk-Search/How-to-extract-a-field-that-appears-several-times-but-with/m-p/181008 r. Ismo
Rather than trying to remove the spaces, why not consider wrapping the value in quotes where it is used: $token_name|s$ https://docs.splunk.com/Documentation/Splunk/9.1.1/Viz/tokens#Token_filters
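As a quick illustration (a hypothetical Simple XML fragment; the token, index, and field names are made up), the |s filter wraps whatever the user typed in double quotes at substitution time, so values containing spaces stay intact:

```
<fieldset>
  <input type="text" token="token_name"></input>
</fieldset>
<row>
  <panel>
    <table>
      <search>
        <!-- if the user enters John Smith, $token_name|s$ expands to "John Smith" -->
        <query>index=example user=$token_name|s$ | stats count by action</query>
      </search>
    </table>
  </panel>
</row>
```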
I can get total disconnects but can't seem to find a way to get the total of how many users disconnected 10 or more times. Here is my search:
index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=host2 OR host=Host1) earliest=$time_tok.earliest$ latest=$time_tok.latest$
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds
| rename IONS as "User ID" Device as "User Device"
| convert timeformat="%m-%d-%Y" ctime(_time) AS date
| timechart span=1d limit=0 count
Hi, as "Splunk Enterprise version 8.2 is no longer supported as of September 30, 2023. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see How to upgrade Splunk Enterprise.", it's best to go to 9.0.6. Probably the biggest issue could be Python 2, if you are using it in some apps or modules. You can check that with the Upgrade Readiness App; just ensure that it's running in your environment and giving you valid responses. You should also read https://lantern.splunk.com/Splunk_Platform/Product_Tips/Upgrades_and_Migration/Upgrading_the_Splunk_platform With those you should manage to update the environment. Of course, if you have a distributed multisite environment with a search head cluster and some enterprise apps, then those instructions are not enough for a new admin. In that case you should have a test environment and/or ask for help from Splunk Professional Services or another company that specializes in Splunk. r. Ismo
Hi, a much better option is to use a real syslog server or SC4S to collect syslogs. And try to avoid UDP, as it always loses packets! * If the data source is streamed over TCP or UDP, such as syslog sources, only one pipeline will be used. Based on that, you cannot increase UDP performance by adding pipelines. r. Ismo
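For example, instead of a direct [udp://514] input, a syslog daemon (rsyslog, syslog-ng, or SC4S) can write per-host files that Splunk then monitors; a sketch, with an illustrative path and host_segment value:

```
# inputs.conf -- monitor the files written by the syslog server
# (assumes the daemon writes to /var/log/remote/<hostname>/*.log)
[monitor:///var/log/remote/*/*.log]
sourcetype = syslog
host_segment = 4
```

File monitoring can also benefit from parallelIngestionPipelines in server.conf, which a single UDP input cannot, and the syslog server buffers data during Splunk restarts instead of dropping it.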