All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I need the JSON array in Splunk `List` view to be expanded by default instead of showing the Plus icon. I have a Splunk event which is a JSON array: [{ "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : This is the start of the transaction", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690242714069 }, { "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : app log text", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690243292964 }, { "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : another app log", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690243306564 }, { "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : {\"data\":{\"fields\":[{\"__typename\":\"name\",\"field\":\"value\",\"field2\":\"value2\",\"field3\":\"value 3\",\"field4\":\"value4\",\"field5\":\"value5\",\"field6\":\"value6\",\"field7\":\"value7\",\"field8\":null,\"field9\":\"value9\",\"field10\":null,\"field11\":111059.0,\"field12\":111059.0,\"field13\":null,\"field14\":\"value14\",\"field15\":\"2018-10-01\",\"field16\":null,\"field17\":false,\"field18\":{\"field19\":\"value19\",\"fieldl20\":\"value20\",\"field21\":2.6,\"field22\":\"2031-10-31\",\"field23\":\"2017-11-06\"},\"field24\":{\"field25\":\"\",\"field26\":\"\"},\"field27\":{\"field28\":{\"field29\":0.0,\"field30\":0.0,\"field31\":240.63,\"field32\":\"2022-12-31\",\"field33\":0.0,\"field34\":\"9999-10-31\"}},\"field35\":[{\"field36\":{\"field37\":\"value37\"}},{\"field38\":{\"field39\":\"value39\"}}],\"field40\":{\"__typename\":\"value40\",\"field41\":\"value41\",\"field42\":\"value 42\",\"field43\":111059.0,\"field44\":\"2031-04-01\",\"field45\":65204.67,\"field46\":null,\"field47\":\"value47\",\"field48\":\"value48\",\"field49\":null,\"field50\":\"value50\",\"field51\":null,\"field52\":null}},{\"__typename\":\"value53\",\"field54\":\"value54\",\"field55\":\"value55\",\"field56\":\"value56\",\"field57\":\"value57\",\"field58\":\"value58\",\"field59\":\"9\",\"field60\":\"value60\",\"field61\":null,\"field62\":\"value62\",\"field63\":null,\"field64\":88841.0,\"field65\":38841.0,\"field66\":null,\"field67\":\"value67\",\"field68\":\"2018-10-01\",\"field69\":null,\"field70\":false,\"field71\":{\"field72\":\"value72\",\"field73\":\"value73\",\"field74\":2.6,\"field75\":\"2031-10-31\",\"field76\":\"2017-11-06\"},\"field77\":{\"field78\":\"\",\"field79\":\"\"},\"field80\":{\"field81\":{\"field82\":0.0,\"field83\":0.0,\"field84\":84.16,\"field85\":\"2022-12-31\",\"field86\":0.0,\"field87\":\"9999-10-31\"}},\"field88\":[{\"field89\":{\"field90\":\"value90\"}},{\"field91\":{\"field92\":\"value92\"}}],\"field93\":null},{\"__typename\":\"value94\",\"field95\":\"value95\",\"field96\":\"value96\",\"field97\":\"value97\",\"field98\":\"value98\",\"field99\":\"value99\",\"field100\":\"1\",\"field101\":\"value101\",\"field102\":null,\"field103\":\"value103\",\"field104\":\"359\",\"field105\":88025.0,\"field106\":79316.87,\"field107\":\"309\",\"field108\":\"value108\",\"field109\":\"2018-10-01\",\"field110\":\"2048-09-30\",\"field111\":false,\"field112\":{\"field113\":\"value113\",\"field114\":\"value114\",\"field115\":2.35,\"field116\":\"2031-10-31\",\"field117\":\"2017-11-06\"},\"field118\":{\"field119\":\"\",\"field120\":\"\"},\"field121\":{\"field122\":{\"field123\":341.58,\"field124\":0.0,\"field125\":155.33,\"field126\":\"2022-12-31\",\"field127\":186.25,\"field128\":\"2022-12-31\"}},\"field129\":[{\"field130\":{\"field131\":\"value131\"}},{\"field132\":{\"field133\":\"value133\"}}],\"field134\":null}]}}", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690243306564 }, { "cf_app_id": "uuid", "cf_app_name": "app-name", "deployment": "cf", "event_type": "LogMessage", "info_splunk_index": "splunk-index", "ip": "ipaddr", "message_type": "OUT", "msg": "2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : This is the end of the transaction", "origin": "rep", "source_instance": "0", "source_type": "APP/PROC/WEB", "timestamp": 1671732690870483226 } ] When I open this event in the Splunk UI in List view, I have to manually click the plus icon to expand each JSON object in the event. Is there an option to make them expanded by default, so that I can click the minus sign to collapse them if I want to?
There is a threat log with 2 sub_types (url and vulnerability); sample data is below.

panwlogs-,2022-12-15T08:42:04.000000Z,no-serial,THREAT,url,10.0,2022-12-15T08:41:45.000000Z,x.x.x.x,x,x,user,,ssl,vsys1,x,untrust,tunnel.101,ethernet1/1,x,560330,1,60906,8292,55427,8292,protocol,action,7317,713,6604,15,2022-12-15T08:39:46.000000Z,0,any,4912899,src_location,US,6,9,decrypt-cert-validation,65541,65542,65550,0,,x,from-policy,,,0,,0,1970-01-01T00:00:00.000000Z,N/A,0,0,0,0,x,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2022-12-15T08:41:46.419000Z,,

panwlogs-,2022-12-14T14:06:10.000000Z,no-serial,THREAT,vulnerability,10.0,2022-12-14T14:06:05.000000Z,src_ip,dest_ip,nat_src_ip,dest_ip,rule,src_user,,echo,vsys1,usodev,untrust,tunnel.102,ethernet1/1,log_forwarding,230581,6,45060,7,34147,7,protocol,action,,threat_id,Informational,client to server,174106,1src_location,dest_location,0,,,0,,,,,0,65541,65542,65550,0,,usodev,,,0,,0,1970-01-01T00:00:00.000000Z,N/A,protocol-anomaly,session_id,0x2,00000000-0000-0000-2300-000000000000,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0,2022-12-14T14:06:05.521000Z,

Both events have different sets of fields. If the sub_type is url, one set of field extractions should happen; if the sub_type is vulnerability, a second set should happen. The requirement is to combine both sub_types under the same sourcetype "threat". Is it possible to do so?

props.conf

[threat]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EXTRACT-log_st = (?:THREAT,)(?<sub_type>.*?),
EVAL-extract_threat = case(sub_type="url", "extract_url", sub_type="vulnerability", "extract_vulnerability")
REPORT-search = "Is it possible to pass extract_url or extract_vulnerability based on the event?"

transforms.conf

[extract_url]
DELIMS = ","
FIELDS = URL_field1,url_field2...

[extract_vulnerability]
DELIMS = ","
FIELDS = vul_field1,vul_field2....
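A REPORT- stanza cannot be chosen conditionally with an EVAL-, but both transforms can be listed together if each carries a REGEX that only matches its own sub_type (DELIMS-based transforms cannot be restricted this way). A rough sketch; the capture groups are placeholders, not the real column layout:

transforms.conf

[extract_url]
REGEX = ^(?:[^,]*,){3}THREAT,url,([^,]*),([^,]*)
FORMAT = url_field1::$1 url_field2::$2

[extract_vulnerability]
REGEX = ^(?:[^,]*,){3}THREAT,vulnerability,([^,]*),([^,]*)
FORMAT = vul_field1::$1 vul_field2::$2

props.conf

[threat]
REPORT-threat = extract_url, extract_vulnerability

Since each REGEX anchors on its sub_type, only the matching transform contributes fields to a given event.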
I have a requirement to pull 90% of the max execution time.

Example: I have 10 requests in an hour, with execution times as below. If I take max(Executation_Time) I get 10 sec, but I want to allow 10% leverage and take the max from the lowest 90% of execution times.

I get the total number of executions (10 in this example) through a search like `stats count(_raw) by Hour`. Now I have to take 10% of the record count, discard that many of the highest records, and take the max time of the remaining 90%.

Tra.  Executation_Time
1.    10 sec
2.    9 sec
3.    8 sec
4.    7 sec
5.    6 sec
6.    5 sec
7.    4 sec
8.    3 sec
9.    2 sec
10.   1 sec
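Discarding the top 10% and taking the max of the remainder is the 90th percentile, which stats can compute directly. A minimal sketch, assuming the field is Executation_Time and Hour is already extracted:

index=... | stats perc90(Executation_Time) AS p90_execution_time, count BY Hour

For the ten values above, perc90 would return roughly 9 sec rather than the raw max of 10 sec.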
I want to set a schedule for my search to find the data sent by each user in our system. This is my search to catch each user who sent more than 2GB. I used `bin _time span=2h`, but maybe it is not correct; it just keeps incrementing every 2 hours from the start of the search range. So how can I build a search that shows who sent more than 2GB of data in each 2-hour window? Many thanks!
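One hedged approach: schedule the search every 2 hours over a fixed window instead of binning a long range, so each run reports one complete window. A minimal sketch, where the index and the user and bytes field names are assumptions about the data:

index=network earliest=-2h@h latest=@h
| stats sum(bytes) AS total_bytes BY user
| where total_bytes > 2*1024*1024*1024

With a cron schedule of 0 */2 * * *, each run evaluates exactly one 2-hour block.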
Hello, everyone. I have an "all-in-one" Splunk installation with a syslog input configured, but input messages are rejected. Below are messages from splunkd.log:

12-21-2022 09:24:24.966 +0300 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1009858353 bytes from src=*:60020 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
12-21-2022 09:24:24.969 +0300 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1009987646 bytes from src=*:60032 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
12-21-2022 09:24:24.975 +0300 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1009858353 bytes from src=*:60034 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
12-21-2022 09:24:31.739 +0300 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1009858353 bytes from src=*:49684 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

I tried increasing queueSize in inputs.conf, but without success.
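Those errors usually mean a raw (non-Splunk) sender is pointed at a splunktcp port: splunktcp expects the Splunk-to-Splunk protocol from forwarders, so the first bytes of a plain syslog stream get misread as a huge message length. A hedged inputs.conf sketch of the distinction (the ports are assumptions):

# only Splunk forwarders should send here
[splunktcp://9997]

# plain syslog devices need a raw tcp/udp input instead
[tcp://514]
sourcetype = syslog
connection_host = ip

If the sender really is a forwarder, a TLS mismatch (one side using splunktcp-ssl, the other plain) can produce the same symptom.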
I received red alarms from the health status. The types of alarm vary over time, but the warnings that occur continuously are Ingestion Latency, IOWait, Searches Delayed, etc., and the detail message displays "Splunkd's processing queue is full." Is there any way to check which processor's queue is filling up? Or is there a way to flush the queue? I increased CPU and memory, but the problem was not solved. I also recently upgraded the Splunk version from 8.1.4 to 9.0.2. Thank you.
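Queue fill levels are recorded in metrics.log, so the blocked processor can usually be identified with a sketch like this:

index=_internal source=*metrics.log* group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=10m max(pct_full) BY name

The name field identifies each queue (parsingqueue, aggqueue, typingqueue, indexqueue); the first one to saturate points at the bottleneck stage. There is no supported way to flush a queue directly; it drains once the downstream stage catches up.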
Hi, I'm using this search to join the apps with their respective SAML group roles:

| rest /services/authentication/users splunk_server=local
| table defaultApp defaultAppSourceRole title roles
| rename defaultApp as splunk_app_name defaultAppSourceRole as defaultrole title as User
| eval splunk_app_name = lower(splunk_app_name)
| join defaultrole type=outer
    [| rest /services/admin/SAML-groups
    | table roles title id
    | rename roles as defaultrole title as idm_role_name]
| dedup splunk_app_name, id

The only issue is that I'm not getting all of the apps with this REST call (probably two-thirds of all apps):

| rest /services/authentication/users splunk_server=local

I've tried other calls like | rest /services/authorization/roles and | rest /services/apps/local, but couldn't join them with the SAML REST call. I need help finding a way to show all apps and then merge them with their SAML group roles. Thank you.
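Since /services/authentication/users only returns apps that are some user's default app, one hedged approach is to start from /services/apps/local (which lists every installed app) and left-join the existing user/SAML chain onto it. A sketch assembled from the pieces above:

| rest /services/apps/local splunk_server=local
| table title
| rename title AS splunk_app_name
| eval splunk_app_name = lower(splunk_app_name)
| join type=left splunk_app_name
    [| rest /services/authentication/users splunk_server=local
    | table defaultApp defaultAppSourceRole
    | rename defaultApp AS splunk_app_name defaultAppSourceRole AS defaultrole
    | eval splunk_app_name = lower(splunk_app_name)
    | join defaultrole type=outer
        [| rest /services/admin/SAML-groups
        | table roles title
        | rename roles AS defaultrole title AS idm_role_name]]

Apps that no user has as a default will still appear, just with empty role columns.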
Afternoon. We are running a Splunk Enterprise 8.2.7.1 deployment utilizing DoD CA certs and WiredTiger as our KV store engine. We have a DEV environment (a 3-member SHC) and a PROD environment (a 5-member SHC) with a multisite indexer cluster. We are seeing the KV store errors below on 2 of our 5 PROD SHC members. Can we get some guidance/assistance please?

KV Store changed status to failed. An error occurred during the last operation ('getServerVersion', domain: '1', code: '11'): Could not find user'
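That mongod error ("Could not find user") generally indicates the member's internal KV store credentials are out of sync with the rest of the cluster. Two standard CLI commands are the usual starting point; treating the resync as disruptive and confirming against the 8.2 docs first would be prudent:

# check KV store state on every member
splunk show kvstore-status

# on an affected member, re-initialize its KV store from the cluster
splunk resync kvstore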
I want to convert this query to tstats for faster searching. Can you help me convert it?

index=win-security host=srv001 user IN ("*adminuser") [ search index=paloalto sourcetype=pan:threat ]
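One caveat first: tstats can only filter and group on indexed fields, so if user is a search-time extraction this will return nothing without an accelerated data model. With that assumption stated, a rough sketch of the outer query is:

| tstats count where index=win-security host=srv001 user IN ("*adminuser") BY _time span=1m

The subsearch does not carry over directly; whatever terms it returns would also have to be indexed fields for tstats to use them.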
I'm trying to run:

| tstats count where index=wineventlog* TERM(EventID=4688) by _time span=1m

It returns no results, but specifying just the term's value seems to work:

| tstats count where index=wineventlog* TERM(4624) by _time span=1m

https://conf.splunk.com/files/2020/slides/PLA1089C.pdf explains the subject well, but my simple query is not working.
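A likely cause, assuming these are XML-rendered Windows events: TERM() matches literal indexed tokens, and the raw XML contains <EventID>4688</EventID>, so EventID and 4688 are indexed as separate tokens and the string EventID=4688 never exists as a single term. A sketch that keeps the token speed-up but pins the value to the right field:

index=wineventlog* TERM(4688) EventCode=4688
| bin _time span=1m
| stats count BY _time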
How do I block a Process ID for WinHostMon? This is what I have in inputs.conf:

[WinHostMon://Process]
interval = 600
disabled = 0
type = process
blacklist = ProcessId="0"
index = windows
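In case blacklist is not honored for WinHostMon (it is documented mainly for event-log and file-monitor inputs), a fallback that works for any input is dropping the events at parse time with a nullQueue transform. A sketch; the REGEX is an assumption about how ProcessId appears in the raw event:

props.conf

[WinHostMon]
TRANSFORMS-drop_pid0 = winhostmon_drop_pid0

transforms.conf

[winhostmon_drop_pid0]
REGEX = ProcessId="?0"?(\s|$)
DEST_KEY = queue
FORMAT = nullQueue

These stanzas belong on the first full instance (heavy forwarder or indexer) that parses the data.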
Hello, I am trying to figure out why the script for my dashboard will not produce results. I am getting the error below:

"..//appname/appserver/static/components/html2canvas" was blocked due to MIME type ("text/html") mismatch (X-Content-Type-Options: nosniff)

Dashboard XML:

<panel>
  <html>
    <div>
      <input id="btn-submit" type="button" class="btn btn-primary" value="Download Screenshot"/>
    </div>
  </html>
</panel>

JS in ...//appserver/static/test.js:

require([
    "underscore",
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!",
    "//splunk/etc/apps/appname/appserver/static/components/html2canvas",
], function(_, $, mvc, html2canvas) {
    $("#btn-submit").on("click", function(e) {
        var screenshot = require("//splunk/etc/apps/appname/appserver/static/components/html2canvas");
        screenshot(document.querySelector("#test1"), {scale: 2}).then(canvas => {
            console.log(canvas);
            var image = canvas.toDataURL("image/png").replace("image/png", "image/octet-stream");
            var link = document.createElement('a');
            link.download = "Dashboard Report.png";
            link.href = image;
            link.click();
        });
    });
});

Any assistance is greatly appreciated.
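The MIME error is the clue: "//splunk/etc/apps/..." is a filesystem path that the browser treats as a protocol-relative URL; splunkweb cannot serve it, and the HTML 404 page it returns fails the nosniff check. Files under appserver/static are served at /static/app/<appname>/..., so a hedged rewrite (keeping the rest of the logic, and dropping the inner require since the module is already injected) would be:

require([
    "underscore",
    "jquery",
    "splunkjs/mvc",
    "/static/app/appname/components/html2canvas.js",
    "splunkjs/mvc/simplexml/ready!"
], function(_, $, mvc, html2canvas) {
    // html2canvas is the module loaded from the URL above
    $("#btn-submit").on("click", function() {
        html2canvas(document.querySelector("#test1"), {scale: 2}).then(function(canvas) {
            var image = canvas.toDataURL("image/png").replace("image/png", "image/octet-stream");
            var link = document.createElement("a");
            link.download = "Dashboard Report.png";
            link.href = image;
            link.click();
        });
    });
});

The .js suffix makes RequireJS treat the dependency as a literal URL rather than a module ID.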
Does anybody know if it is possible to add a Dashboard Studio dashboard to the navigation in an app? When adding a Dashboard Studio dashboard to the navigation menu, the dashboard does not display.
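For reference, classic navigation lives in the app's default.xml, and Studio dashboards are still views, so they are normally referenced by dashboard ID like any other view. A minimal sketch (my_studio_dashboard is a hypothetical ID):

<nav search_view="search">
  <view name="search" default="true"/>
  <view name="my_studio_dashboard"/>
</nav>

If the entry still renders blank, it is worth confirming the ID (from the dashboard's URL, not its display label) and that the Splunk version in use supports Studio dashboards in nav menus.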
We currently have a report every morning that shows which users were removed from a particular AD group the previous day. The report sometimes shows too many events. I want to modify it so that if a user was removed from an AD group and added back within one hour, the removal is ignored. Examples are below; EventCode 4729 is a user getting removed and 4728 is a user getting added.

_time                MemberSid  AD_Group    EventCode
2022-12-21 14:48:22  bob        Executives  4728
2022-12-21 12:48:22  bob        Executives  4729

This would show up in the morning report: bob was removed from the Executives group at 12:48 and was not added back until over an hour later.

_time                MemberSid  AD_Group    EventCode
2022-12-21 14:38:22  janice     Executives  4728
2022-12-21 13:00:22  bob        Executives  4728
2022-12-21 12:55:22  dylan      Executives  4729
2022-12-21 12:50:22  janice     Executives  4729
2022-12-21 12:48:22  bob        Executives  4729

Janice and Dylan would show up in the morning report in this case, since Janice was added back over an hour later and Dylan was never added back at all (bob would not appear, since he was added back within 12 minutes).

I'm not good with SPL and am having trouble with which command(s) to use to achieve the above. Below is the search I currently have; the comment indicates what I'm trying to do.

index=oswinsec sourcetype="XmlWinEventLog" EventCode IN (4728,4729) Group_Name="Executives"
| rename Group_Name as AD_Group
| table _time, MemberSid, AD_Group, EventCode
| sort 0 MemberSid
``` WHERE for a user, if there is EventCode 4729 and no EventCode 4728 following, or EventCode 4728 over an hour later, then keep those events/results. In other words, ignore users with EventCode 4729 and EventCode 4728 within an hour of each other. ```
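One hedged approach: sort each user/group stream newest-first so streamstats can look at the chronologically next event, then keep only removals with no re-add within 3600 seconds. A sketch:

index=oswinsec sourcetype="XmlWinEventLog" EventCode IN (4728,4729) Group_Name="Executives"
| rename Group_Name as AD_Group
| sort 0 MemberSid AD_Group - _time
| streamstats current=f window=1 last(_time) AS next_time last(EventCode) AS next_code BY MemberSid AD_Group
| where EventCode==4729 AND (isnull(next_code) OR next_code!=4728 OR (next_time - _time) > 3600)
| table _time, MemberSid, AD_Group, EventCode

Because the rows are in descending time order, the "previous" row streamstats sees is the next event in real time for that user and group.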
Hello Splunkers, I think I could be overthinking the search below. I am working on adding an earliest and latest time to the search, but I need to ensure that no duplicates are stored in the lookup table. Anybody have any recommendations?

My first impression is that we could end up with a lookup table that becomes very large over time if we do not run the search over all time, which we are trying not to do.

index=salesforce eventtype=sfdc_object sourcetype="sfdc:account"
| eval object_type="Account"
| rename Name AS object_name
| sort 0 - _time
| dedup Id
| eval object_id = substr(Id, 1, len(Id)-3)
| table LastModifiedDate, LastModifiedById, Id, object_id, object_name, object_type, AccountNumber
| outputlookup lookup_sfdc_accounts.csv
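A common pattern for incremental lookup maintenance: search only a recent window, append the existing lookup back in, and dedup on the key before writing, so reruns never produce duplicates. A sketch (the -24h window is an assumption, and it relies on LastModifiedDate sorting chronologically, which ISO timestamps do):

index=salesforce eventtype=sfdc_object sourcetype="sfdc:account" earliest=-24h
| eval object_type="Account"
| rename Name AS object_name
| eval object_id = substr(Id, 1, len(Id)-3)
| table LastModifiedDate, LastModifiedById, Id, object_id, object_name, object_type, AccountNumber
| inputlookup append=true lookup_sfdc_accounts.csv
| sort 0 - LastModifiedDate
| dedup Id
| outputlookup lookup_sfdc_accounts.csv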
Hello, I am trying to extract the 201 text below (the HTTP status code) as one separate field from two separate events. How may I do this? I attempted the field extraction feature in Splunk but had no luck. Any assistance is appreciated!

Event 1: 106.51.86.25 [22/Dec/2022:07:48:10 -0500] POST /services/public/v1/signup HTTP/1.1 201 5 539

Event 2: 23.197.194.86 - - [22/Dec/2022:07:48:09 -0500] "POST /services/public/v1/signup HTTP/1.1" 201 -
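Since the status code follows the HTTP/1.x token in both layouts, one rex can cover both events. A minimal sketch:

... | rex field=_raw "HTTP/1\.\d\"?\s+(?<status_code>\d{3})"

The optional \"? absorbs the closing quote present in Event 2; the same pattern could be made automatic as an EXTRACT- in props.conf.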
Hello, I am new to using Splunk across a domain, and I am attempting to build a query that details any domain user account changes. I want to pull the change type, who changed the account, and the date/time from the /var/log/dirsrv logs. Any suggestions?
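Assuming these are 389 Directory Server access logs (the usual occupant of /var/log/dirsrv), account changes appear as MOD/ADD/DEL operations that can be parsed with rex. A hedged starting point; the index is a placeholder and the regex is a guess at the exact log format:

index=linux source="/var/log/dirsrv/*access*" (" MOD " OR " ADD " OR " DEL ")
| rex "conn=(?<conn>\d+)\s+op=\d+\s+(?<change_type>MOD|ADD|DEL)\s+dn=\"(?<target_account>[^\"]+)\""
| table _time, change_type, target_account, conn

Attributing the change to a person usually means correlating the conn value back to the earlier BIND line that carries the binding DN.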
Splunk Enterprise 9.0.1 on-premises, clustered search heads and indexers, DB Connect 3.7.0. We found that every time the indexer cluster is restarted, some events are duplicated in the indexes around the time of the restart. There are some older threads discussing similar issues, but they involve much older versions of the software. Any ideas on how to troubleshoot/debug/work around this issue?
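To confirm and quantify the duplication around a restart, a hedged sketch that fingerprints raw events (the index name and window are assumptions):

index=my_db_index earliest=-4h
| eval fingerprint = sha256(_raw)
| stats count, earliest(_time) AS first_seen, latest(_time) AS last_seen BY fingerprint
| where count > 1

If the duplicates cluster at the restart time, one known source is indexer acknowledgment: with useACK enabled, any blocks delivered but not yet acknowledged before the restart are resent, so the same DB Connect batch can be indexed twice.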
Hi Splunkers, I have a problem with the "Splunk Security Essentials" application. Currently I have 34 activated correlation searches that I would like to map onto the MITRE ATT&CK framework. Viewing the "sse_content_exported_lookup" file, the MITRE information does not match the information reported in each correlation rule. Also, there are correlation searches in the "sse_content_exported_lookup" file that have MITRE annotations but do not appear in the MITRE map. However, all 34 correlation searches show up in the bookmarks. Could you suggest a solution? Is there any procedure I can follow to make sure that all active correlation searches appear in the MITRE map? Thank you.
Looking for an explanation of the various HTTP(S)-related timeouts and their impact.