All Topics

I am using the Python SDK to add the allow_skew setting to saved searches. See the generalised code snippet below:

import splunklib.client as client

splunk_svc = client.connect(host="localhost", port=8089, username="admin", password="******")
savedsearch = splunk_svc.saved_searches["alert-splnk-test_email_v1"]
new_skew = "5m"
kwargs = {"allow_skew": new_skew}
savedsearch.update(**kwargs).refresh()

This code works and adds 'allow_skew = 5m' to the specific saved search stanza in {app/local OR system/local}/savedsearches.conf under [alert-splnk-test_email_v1]. The code can also be extended to more or all saved searches on the platform, and it replicates the changes in a SH cluster, as expected.

I want a reliable way to remove/erase the allow_skew setting from specific saved searches, preferably using the Python SDK. The setting needs to be removed from the stanza, so that the allow_skew setting from system/local/savedsearches.conf [default] is picked up. The only other ways I could think of are:

1. Using the class splunklib.client.Stanza(service, path, **kwargs) somehow. Any directions on how to use it?
2. Recreating the saved searches without allow_skew, but that would mean a lot of work for a bunch of saved searches.

Any help is appreciated.
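One possible direction, sketched with the caveat that the Splunk REST layer does not (to my knowledge) expose a clean "delete this one key" operation: the savedsearches.conf stanza can be reached through the SDK's configurations collection, where each stanza is a splunklib.client.Stanza entity, and posting an empty value clears the setting. Depending on the Splunk version this may leave an empty "allow_skew =" line in local rather than removing the key outright, so verify the resulting behaviour against the [default] value:

import splunklib.client as client

splunk_svc = client.connect(host="localhost", port=8089,
                            username="admin", password="******")

# confs["savedsearches"] is a ConfigurationFile; indexing it by stanza
# name returns a splunklib.client.Stanza entity.
stanza = splunk_svc.confs["savedsearches"]["alert-splnk-test_email_v1"]

# Post an empty value for the key to clear it in this stanza.
stanza.update(**{"allow_skew": ""})
stanza.refresh()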
Hi, I'm trying to calculate the weekly average of events by their severity and compare the daily amount with the weekly average. I created a multivalue field, but the values in the field get reordered and they don't match the rest of the data (the severity multivalue field). I tried using mvsort() but it did not work. What did I do wrong? Thank you for any help. Query, results and expected results below:

index=myindex earliest=-7d@d latest=now()
| bin _time span=1d
| fields _time, severity
| stats count by _time, severity
| eventstats avg(count) as average by severity
| eval change_percent=round(((count-average)*100)/count,0)
| eval average=round(average,2)
| eval change_percent=change_percent+"%"
| table _time severity count average change_percent
| stats values(severity) as severity, values(count) as AlertCount, values(average) as average, values(change_percent) as change_percent by _time
| sort - _time
| eval average=mvsort(average)
| eval change_percent=mvsort(change_percent)
| eval AlertCount=mvsort(AlertCount)
| eval severity=mvsort(severity)

results (multivalue cells shown comma-separated):

_time       severity                     AlertCount  average           change_percent
2022-12-23  High, Informational          3, 8        3.25, 3.67        -22%, 59%
2022-12-22  High                         1           3.25              -225%
2022-12-21  High, Informational          3           3.25, 3.67        -22%, -8%
2022-12-20  High                         4           3.25              19%
2022-12-19  High, Informational, Medium  1, 2, 5     2.00, 3.25, 3.67  -100%, -62%, 27%

expected results:

_time       severity                     AlertCount  average           change_percent
2022-12-23  High, Informational          3, 8        3.25, 3.67        -22%, 59%
2022-12-22  High                         1           3.25              -225%
2022-12-21  High, Informational          3           3.25, 3.67        -8%, -22%
2022-12-20  High                         4           3.25              19%
2022-12-19  High, Informational, Medium  1, 2, 5     3.25, 3.67, 2.00  -225%, -83,5%, 60%
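The likely cause is that values() returns each field's unique values sorted independently, so the positional pairing across severity, AlertCount, average and change_percent is lost, and mvsort() afterwards only re-sorts each field on its own. A sketch of an alternative, keeping the same index and fields as the question: sort the rows by severity first and use list(), which preserves arrival order, so all four multivalue fields stay row-aligned:

index=myindex earliest=-7d@d latest=now()
| bin _time span=1d
| stats count by _time, severity
| eventstats avg(count) as average by severity
| eval change_percent=round(((count-average)*100)/count,0)."%"
| eval average=round(average,2)
| sort 0 _time severity
| stats list(severity) as severity, list(count) as AlertCount, list(average) as average, list(change_percent) as change_percent by _time
| sort - _time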
I have set up the ServiceNow-to-Splunk integration, and for the inputs I have turned on the Splunk sys user group input as well as the Splunk sys user input. Currently I am getting the assignment group name. Does anyone have an idea whether we can also get assignment group member details from ServiceNow into Splunk? I'm having a hard time finding it; please share your thoughts!
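In ServiceNow, group membership lives in the sys_user_grmember table (the many-to-many link between sys_user and sys_user_group), and the Splunk Add-on for ServiceNow allows custom inputs on additional tables, so one hedged direction is to add an input on that table and report on it. A sketch, where the sourcetype and the dv_* field names are assumptions to verify against your own ingested events:

index=servicenow sourcetype="snow:sys_user_grmember"
| stats values(dv_user) as group_members by dv_group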
I want to run a query that returns users added to a group. This would assist me in creating an alert that notifies me when a new user is added to this group.
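A sketch assuming Windows Security event logs are indexed (the index, sourcetype, and field names below follow common Windows TA conventions but are assumptions to verify against your data): EventCodes 4728, 4732 and 4756 record a member being added to a global, local and universal security group respectively:

index=wineventlog sourcetype="WinEventLog:Security" EventCode IN (4728, 4732, 4756) Group_Name="YourGroup"
| table _time, EventCode, Group_Name, Account_Name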
Hello everyone, I need to create BTs from the value of an attribute. For example, if codigo == 1 it must be one BT, else if codigo == 3 it must be another BT. The source code is written in Java and I'm using POJO. Thanks.
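In AppDynamics this is typically done with a custom POJO match rule plus transaction splitting rather than in the code itself: the rule matches the entry-point class/method and splits the BT using a method parameter or a getter chain on a parameter. A minimal Java sketch of the kind of entry point such a rule could key on (class and method names are hypothetical):

public class PedidoService {

    // Candidate POJO entry point. A custom match rule on this class/method,
    // configured to split using a method parameter (parameter 0 here) or a
    // getter chain such as getCodigo(), would name the BT per codigo value,
    // producing one BT for codigo == 1 and another for codigo == 3.
    public void procesar(int codigo) {
        // ... business logic ...
    }
}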
Hi at all, in Enterprise Security I'm trying to customize a Suppression Rule by inserting a lookup containing the IP addresses to whitelist in one Correlation Search, using this search:

`get_notable_index` source="Network - Vulnerability Scanner Detection (by targets) - Rule" [ | inputlookup suppression_ip.csv | fields src ]

and I get the following error message:

Error saving suppression. Error parsing search.

I also tried replacing the subsearch with a macro, but with the same result. Does anyone know if there's a limitation in Suppression Rule searches (e.g. because they are eventtypes), or what else could cause this? Ciao. Giuseppe
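As far as I can tell, ES stores notable suppressions as eventtypes, and eventtypes cannot contain subsearches, macros or pipes, which would explain the parse error on save. A hedged alternative that keeps the whitelist in the lookup is to filter inside the correlation search itself, before the notable is created, instead of in the suppression, for example by appending to the correlation search's base search:

... correlation search base ... NOT [ | inputlookup suppression_ip.csv | fields src ]

The subsearch returns the src field, so the NOT [...] expands to NOT (src=a OR src=b OR ...) against the whitelist.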
Hi, I'm struggling to calculate an hourly or daily average and display results when there are no events at all, which in theory should be counted as 0 and included in the average calculation. Currently my query calculates the average in a given timespan only if there are events for a specific severity; if not, it remains blank and is not included in the average calculation. Query, results and expected results:

index=myindex earliest=-7d@d latest=now()
| bin _time span=1h
| fields _time, severity
| stats count by _time, severity
| eval previous=if(_time<relative_time(now(),"@d"),count,null())
| eventstats avg(previous) as average by severity
| eval change_percent=round(((count-average)*100)/count,0)."%"
| table _time severity count average change_percent

_time             severity  count  average  change_percent
2022-12-16 10:00  High      2      2.25     -12%
2022-12-16 12:00  Low       2      2        0%
2022-12-16 14:00  Medium    3      2        33%

I'd like to show something like this:

_time                   severity  count  average  change_percent
2022-12-16 10:00-11:00  High      2      0.5      x%
2022-12-16 10:00-11:00  Medium    0      1        -x%
2022-12-16 10:00-11:00  Low       0      1        x%
2022-12-16 11:00-12:00  High      0      0.5      x%
2022-12-16 11:00-12:00  Medium    0      1        x%
2022-12-16 11:00-12:00  Low       0      1        x%
2022-12-16 12:00-13:00  High      0      0.5      x%
2022-12-16 12:00-13:00  Medium    0      1        x%
2022-12-16 12:00-13:00  Low       2      1        x%

Thank you for any help.
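A sketch of one way to force a row for every hour/severity combination, using the same index and fields as the question: timechart emits every time bucket in the range (with 0 for missing combinations), and untable turns the matrix back into one row per _time/severity pair. Note the percentage needs guarding, since count can now legitimately be 0:

index=myindex earliest=-7d@d latest=now()
| timechart span=1h count by severity
| fillnull value=0
| untable _time severity count
| eval previous=if(_time<relative_time(now(),"@d"), count, null())
| eventstats avg(previous) as average by severity
| eval change_percent=if(count=0, "N/A", round(((count-average)*100)/count,0)."%")
| table _time severity count average change_percent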
Hello, I am trying to add a data input to an app I created using Splunk Add-on Builder. I enabled checkpointing and specified a checkpoint parameter name (last_updated), but in the UI for the data input it gives the following error: "The following required arguments are missing: last_updated." There is no section in the UI that lets me input a 'last_updated' argument. Thanks
Search query for including non-business hours and weekends, i.e. excluding Monday to Friday 9am to 5pm.
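A sketch, assuming the cutoff should follow the search head's local time zone (the index name is a placeholder): keep an event when it falls on a weekend or outside 09:00-17:00:

index=your_index
| eval wday=strftime(_time, "%a"), hour=tonumber(strftime(_time, "%H"))
| where wday="Sat" OR wday="Sun" OR hour<9 OR hour>=17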
Hi, I need the JSON array in Splunk `List` view to be expanded by default instead of showing the Plus icon. I have a Splunk event which is a JSON array: [{ "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : This is the start of the transaction", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690242714069 }, { "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : app log text", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690243292964 }, { "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : another app log", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690243306564 }, { "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : {\"data\":{\"fields\":[{\"__typename\":\"name\",\"field\":\"value\",\"field2\":\"value2\",\"field3\":\"value 3\",\"field4\":\"value4\",\"field5\":\"value5\",\"field6\":\"value6\",\"field7\":\"value7\",\"field8\":null,\"field9\":\"value9\",\"field10\":null,\"field11\":111059.0,\"field12\":111059.0,\"field13\":null,\"field14\":\"value14\",\"field15\":\"2018-10-01\",\"field16\":null,\"field17\":false,\"field18\":{\"field19\":\"value19\",\"fieldl20\":\"value20\",\"field21\":2.6,\"field22\":\"2031-10-31\",\"field23\":\"2017-11-06\"},\"field24\":{\"field25\":\"\",\"field26\":\"\"},\"field27\":{\"field28\":{\"field29\":0.0,\"field30\":0.0,\"field31\":240.63,\"field32\":\"2022-12-31\",\"field33\":0.0,\"field34\":\"9999-10-31\"}},\"field35\":[{\"field36\":{\"field37\":\"value37\"}},{\"field38\":{\"field39\":\"value39\"}}],\"field40\":{\"__typename\":\"value40\",\"field41\":\"value41\",\"field42\":\"value 
42\",\"field43\":111059.0,\"field44\":\"2031-04-01\",\"field45\":65204.67,\"field46\":null,\"field47\":\"value47\",\"field48\":\"value48\",\"field49\":null,\"field50\":\"value50\",\"field51\":null,\"field52\":null}},{\"__typename\":\"value53\",\"field54\":\"value54\",\"field55\":\"value55\",\"field56\":\"value56\",\"field57\":\"value57\",\"field58\":\"value58\",\"field59\":\"9\",\"field60\":\"value60\",\"field61\":null,\"field62\":\"value62\",\"field63\":null,\"field64\":88841.0,\"field65\":38841.0,\"field66\":null,\"field67\":\"value67\",\"field68\":\"2018-10-01\",\"field69\":null,\"field70\":false,\"field71\":{\"field72\":\"value72\",\"field73\":\"value73\",\"field74\":2.6,\"field75\":\"2031-10-31\",\"field76\":\"2017-11-06\"},\"field77\":{\"field78\":\"\",\"field79\":\"\"},\"field80\":{\"field81\":{\"field82\":0.0,\"field83\":0.0,\"field84\":84.16,\"field85\":\"2022-12-31\",\"field86\":0.0,\"field87\":\"9999-10-31\"}},\"field88\":[{\"field89\":{\"field90\":\"value90\"}},{\"field91\":{\"field92\":\"value92\"}}],\"field93\":null},{\"__typename\":\"value94\",\"field95\":\"value95\",\"field96\":\"value96\",\"field97\":\"value97\",\"field98\":\"value98\",\"field99\":\"value99\",\"field100\":\"1\",\"field101\":\"value101\",\"field102\":null,\"field103\":\"value103\",\"field104\":\"359\",\"field105\":88025.0,\"field106\":79316.87,\"field107\":\"309\",\"field108\":\"value108\",\"field109\":\"2018-10-01\",\"field110\":\"2048-09-30\",\"field111\":false,\"field112\":{\"field113\":\"value113\",\"field114\":\"value114\",\"field115\":2.35,\"field116\":\"2031-10-31\",\"field117\":\"2017-11-06\"},\"field118\":{\"field119\":\"\",\"field120\":\"\"},\"field121\":{\"field122\":{\"field123\":341.58,\"field124\":0.0,\"field125\":155.33,\"field126\":\"2022-12-31\",\"field127\":186.25,\"field128\":\"2022-12-31\"}},\"field129\":[{\"field130\":{\"field131\":\"value131\"}},{\"field132\":{\"field133\":\"value133\"}}],\"field134\":null}]}}", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690243306564 }, { "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : This is the end of the transaction", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690870483226 } ]

When I open this in Splunk Web in List view, I have to manually click the `plus` icon to expand each JSON object in the event. Is there an option to make them expanded by default, so that I can click the `minus` icon to minimise them if I want to?
There is a threat log with 2 sub_types (url and vulnerability); sample data are below.

panwlogs-,2022-12-15T08:42:04.000000Z,no-serial,THREAT,url,10.0,2022-12-15T08:41:45.000000Z,x.x.x.x,x,x,user,,ssl,vsys1,x,untrust,tunnel.101,ethernet1/1,x,560330,1,60906,8292,55427,8292,protocol,action,7317,713,6604,15,2022-12-15T08:39:46.000000Z,0,any,4912899,src_location,US,6,9,decrypt-cert-validation,65541,65542,65550,0,,x,from-policy,,,0,,0,1970-01-01T00:00:00.000000Z,N/A,0,0,0,0,x,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2022-12-15T08:41:46.419000Z,,

panwlogs-,2022-12-14T14:06:10.000000Z,no-serial,THREAT,vulnerability,10.0,2022-12-14T14:06:05.000000Z,src_ip,dest_ip,nat_src_ip,dest_ip,rule,src_user,,echo,vsys1,usodev,untrust,tunnel.102,ethernet1/1,log_forwarding,230581,6,45060,7,34147,7,protocol,action,,threat_id,Informational,client to server,174106,1src_location,dest_location,0,,,0,,,,,0,65541,65542,65550,0,,usodev,,,0,,0,1970-01-01T00:00:00.000000Z,N/A,protocol-anomaly,session_id,0x2,00000000-0000-0000-2300-000000000000,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0,2022-12-14T14:06:05.521000Z,

Both events have a different set of fields. If the sub_type is url, one set of field extractions should happen; if the sub_type is vulnerability, a second set of field extractions should happen. The requirement is to combine both sub_types under the same sourcetype "threat". Is it possible to do so?

props.conf

[threat]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EXTRACT-log_st = (?:THREAT,)(?<sub_type>.*?),
EVAL-extract_threat = case(sub_type="url", "extract_url", sub_type="vulnerability", "extract_vulnerability")
REPORT-search = "Is it possible to pass extract_url or extract_vulnerability based on the event?"

transforms.conf

[extract_url]
DELIMS = ","
FIELDS = url_field1,url_field2,...

[extract_vulnerability]
DELIMS = ","
FIELDS = vul_field1,vul_field2,...
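This should be possible without the EVAL switch: a REPORT transform whose REGEX does not match an event simply extracts nothing from it, so both transforms can be attached to the sourcetype, each anchored on its own sub_type. The trade-off is moving from DELIMS to REGEX with capture groups (the field names below are placeholders; extend the pattern per column):

props.conf

[threat]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
REPORT-url = extract_url
REPORT-vulnerability = extract_vulnerability

transforms.conf

# Fires only when the 5th CSV column is "url"
[extract_url]
REGEX = ^(?:[^,]*,){3}THREAT,url,([^,]*),([^,]*)
FORMAT = url_field1::$1 url_field2::$2

# Fires only when the 5th CSV column is "vulnerability"
[extract_vulnerability]
REGEX = ^(?:[^,]*,){3}THREAT,vulnerability,([^,]*),([^,]*)
FORMAT = vul_field1::$1 vul_field2::$2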
I have a requirement to pull 90% of the max execution time.

Example: I have 10 requests in an hour, with the execution times below. If I take max(Executation_Time) I get 10 sec, but I want to give 10% leverage and take the max from 90% of the execution times.

I get the total number of executions (10 in this example) through a search like `stats count(_raw) by Hour`. Now I have to take 10% of the record count, discard that many records, and get the max of the remaining 90%.

Tra.  Executation_Time
1     10 sec
2     9 sec
3     8 sec
4     7 sec
5     6 sec
6     5 sec
7     4 sec
8     3 sec
9     2 sec
10    1 sec
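Dropping the top 10% and taking the max of the rest is exactly the 90th percentile, which stats can compute directly; a sketch, assuming the field is named Executation_Time as in the example and an Hour field already exists (exactperc90() gives the exact value instead of the default approximation):

... base search ...
| stats perc90(Executation_Time) as p90_exec_time, max(Executation_Time) as max_exec_time by Hour

In the example above, perc90 would land around 9 sec, i.e. the max after discarding the slowest 10% of the 10 requests.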
I want to set a schedule for my search to find the data sent by each user in our system. This is my search to catch each user who sent more than 2GB. I used bin _time span=2h, but maybe it is not correct; it accumulates incrementally every 2 hours. So how can I build a search that shows who sent more than 2GB of data in each 2-hour window? Many thanks!
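A sketch, assuming a bytes field holds the transferred size (index, sourcetype and field names are placeholders): schedule the search every 2 hours (cron 0 */2 * * *) over only the previous window with earliest=-2h@h latest=@h, so each run evaluates one complete 2-hour slice instead of re-binning a growing range:

index=your_index sourcetype=your_sourcetype earliest=-2h@h latest=@h
| stats sum(bytes) as total_bytes by user
| eval total_gb=round(total_bytes/1024/1024/1024, 2)
| where total_bytes > 2*1024*1024*1024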
Hello everyone, I have an "all-in-one" Splunk installation and configured a syslog input, but input messages are rejected. Below are messages from splunkd.log:

12-21-2022 09:24:24.966 +0300 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1009858353 bytes from src=*:60020 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
12-21-2022 09:24:24.969 +0300 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1009987646 bytes from src=*:60032 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
12-21-2022 09:24:24.975 +0300 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1009858353 bytes from src=*:60034 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
12-21-2022 09:24:31.739 +0300 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1009858353 bytes from src=*:49684 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

I tried increasing queueSize in inputs.conf, but without success.
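This error normally means raw data is arriving on a splunktcp port (such as 9997), which expects the Splunk-to-Splunk forwarding protocol, rather than on a plain tcp:// or udp:// input; the absurd "message size" is the parser misreading raw syslog bytes as S2S framing. A sketch of an inputs.conf syslog input to point the devices at instead (port 514 and the sourcetype are conventional choices, adjust as needed):

# Plain-text TCP input; no Splunk-to-Splunk framing expected
[tcp://514]
sourcetype = syslog
connection_host = ip

# Or, if the devices send UDP syslog
[udp://514]
sourcetype = syslog
connection_host = ip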
I received red alarms from the health status. The types of alarm vary over time, but the warnings that continuously occur are Ingestion Latency, IOWait, Searches Delayed, etc., and the detail message displays 'Splunkd's processing queue is full.' Is there any way to check which process is in the queue? Or is there a way to flush the queue? I increased CPU and memory, but the problem was not solved. I also recently upgraded the Splunk version from 8.1.4 to 9.0.2. Thank you.
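A sketch for identifying which pipeline queues are filling, based on the queue metrics every Splunk instance writes to metrics.log; when several queues are full, the furthest-downstream full queue (e.g. indexqueue for disk I/O, typingqueue for regex work) is usually the bottleneck:

index=_internal source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) by name

As far as I know there is no supported way to flush a queue short of restarting splunkd, and the backlog tends to return unless the underlying bottleneck is addressed.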
Hi, I'm using this search to join the apps with their respective SAML group roles:

| rest /services/authentication/users splunk_server=local
| table defaultApp defaultAppSourceRole title roles
| rename defaultApp as splunk_app_name defaultAppSourceRole as defaultrole title as User
| eval splunk_app_name=lower(splunk_app_name)
| join defaultrole type=outer
    [| rest /services/admin/SAML-groups
     | table roles title id
     | rename roles as defaultrole title as idm_role_name]
| dedup splunk_app_name, id

The only issue is that I'm not getting all of the apps with this REST call (probably 2/3 of all apps):

| rest /services/authentication/users splunk_server=local

I've tried other calls like | rest /services/authorization/roles and | rest /services/apps/local, but couldn't join them with the SAML REST call. I need help finding a way to show all apps and then merge them with their SAML group roles. Thank you
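One possible direction, sketched on the assumption that "an app's roles" can be taken from the app's read permissions: /services/apps/local returns every app along with eai:acl.perms.read, the list of roles that can see the app, and those roles can then be matched against the SAML group-to-role mapping:

| rest /services/apps/local splunk_server=local
| table title "eai:acl.perms.read"
| rename title as splunk_app_name, "eai:acl.perms.read" as role
| mvexpand role
| join role type=outer
    [| rest /services/admin/SAML-groups
     | table roles title
     | rename roles as role, title as idm_role_name
     | mvexpand role]
| stats values(idm_role_name) as idm_roles by splunk_app_name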
Afternoon, we are running a Splunk Enterprise 8.2.7.1 deployment utilizing DOD CA certs and wiredTiger as our KV store engine. We have a DEV environment (a 3-member SHC) and a PROD environment (a 5-member SHC) with a multisite indexer cluster. We are seeing the KV store errors below on 2 of our 5 SHC members. Can we get some guidance/assistance please?

KV Store changed status to failed. An error occurred during the last operation ('getServerVersion', domain: '1', code: '11'): Could not find user'
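A general recovery path worth trying on the two affected members, offered as standard KV store CLI steps rather than a confirmed fix for this particular authentication error (which may trace back to certificate changes):

# On an affected member: confirm KV store and replication status
splunk show kvstore-status

# If the member is stale or failed, wipe and re-pull its KV store
# data from the rest of the search head cluster
splunk resync kvstore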
I want to convert this query to tstats for faster searching. Can you help me convert it?

index=win-security host=srv001 user IN ("*adminuser") [ search index=paloalto sourcetype=pan:threat ]
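A caveat first: tstats can only filter and group on indexed fields (index, host, source, sourcetype, and index-time extractions) or on accelerated data model fields, so the search-time user field and values fed in from a raw-event subsearch generally do not translate directly. A sketch of the part that does convert:

| tstats count where index=win-security host=srv001 by _time span=1h

If user happens to be extracted at index time, a filter such as user="*adminuser" could be added to the where clause; otherwise an accelerated data model (e.g. Authentication) is the usual route to tstats speed with search-time fields.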
I'm trying to run:

| tstats count where index=wineventlog* TERM(EventID=4688) by _time span=1m

It returns no results, but specifying just the term's value seems to work:

| tstats count where index=wineventlog* TERM(4624) by _time span=1m

https://conf.splunk.com/files/2020/slides/PLA1089C.pdf explains the subject well, but my simple query is not working.
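TERM() matches tokens as they were indexed from _raw, so TERM(EventID=4688) only hits events whose raw text literally contains EventID=4688 bounded by major breakers; classic-format Windows events write EventCode=4688 in the body, and the XML format renders <EventID>4688</EventID>, so that exact token typically never exists, while a bare 4624 does get indexed as a standalone term. A sketch assuming classic (non-XML) WinEventLog data:

| tstats count where index=wineventlog* TERM(EventCode=4688) by _time span=1m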
How do I block a Process ID for WinHostMon? This is what I have in inputs.conf:

[WinHostMon://Process]
interval = 600
disabled = 0
type = process
blacklist = ProcessId="0"
index = windows
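As far as I can tell, the blacklist setting is honored by WinEventLog inputs rather than by WinHostMon, so one alternative is to drop the unwanted events at parsing time with a nullQueue transform on the indexer or heavy forwarder. A sketch, where the regex is an assumption about the raw event layout (check _raw for the exact field name and casing before using it):

props.conf

[WinHostMon]
TRANSFORMS-drop_pid0 = winhostmon_drop_pid0

transforms.conf

# Send events whose body contains a ProcessId=0 line to the null queue
# (dropped at index time)
[winhostmon_drop_pid0]
REGEX = (?m)^ProcessId=0$
DEST_KEY = queue
FORMAT = nullQueue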