All Topics


On Splunk, I have the following 2 searches:

1) `ABC_logs(traffic)` user != "unknown" src_ip IN (*) dest_ip IN (*) | stats values(src_ip) values(src_zone) values(dest_ip) values(dest_port) values(app) values(transport) values(session_end_reason) values(user) by host rule action | rename values(*) as * | dedup src_ip, dest_ip | rename src_ip as device_ip_address | fields device_ip_address, dest_ip, user, host [search index=DEF sourcetype = asset_registry_2 | dedup device_ip_address | fields device_ip_address, host ]

2) | inputlookup laso_lab_networks.csv | join type=inner LAB_ID [| inputlookup laso_lab_firewalls.csv | eval Firewall = split(Firewall, ",") | mvexpand Firewall] | dedup Firewall | eval Firewall = mvindex(split(Firewall, "."), 0) | fields LAB_ID, Lab_Mgr, Firewall

I need to write a single query that would LINK both these queries such that the field "Firewall" from search 1 is joined to the field "host" from search 2. I appreciate any help you can offer.
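One way to link them (a sketch only, assuming the short firewall name produced by the lookup pipeline matches the host value coming out of the traffic search; all field and lookup names are taken from the two searches above): run search 1, rename host to Firewall, and join the lookup pipeline onto it:

<search 1 as written above, ending with | fields device_ip_address, dest_ip, user, host>
| rename host as Firewall
| join type=inner Firewall
    [| inputlookup laso_lab_networks.csv
     | join type=inner LAB_ID
         [| inputlookup laso_lab_firewalls.csv | eval Firewall = split(Firewall, ",") | mvexpand Firewall]
     | dedup Firewall
     | eval Firewall = mvindex(split(Firewall, "."), 0)
     | fields LAB_ID, Lab_Mgr, Firewall]

If the hostnames are fully qualified on one side and short on the other, normalise them the same way (for example mvindex(split(host, "."), 0)) before the join.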
On a Splunk indexer node, the CacheManager's resumable upload fails with HTTP error 7:

05-23-2023 02:19:57.772 -0700 ERROR CacheManager - action=upload, cache_id="bid|_introspection~0~88365063-AB85-4F0E-A305-D355133DA4E8|", status=failed, reason="HTTP Error 7: Permanent error in CreateResumableSession: {\n "error": {\n "code": 403,\n "message": "Access denied.",\n "errors": [\n {\n "message": "Access denied.",\n "domain": "global",\n "reason": "forbidden"\n }\n ]\n }\n}\n [PERMISSION_DENIED]", elapsed_ms=11

All of the configuration has been set, but the upload still fails with permission denied. Any sample configuration would be helpful.
Hi All, We noticed that one of our Heavy Forwarders has not been sending _audit and _internal logs to our indexers. It is only sending _introspection and _telemetry logs. We need to have _audit/_internal logs from this server as well. Can anybody help? Telnet to the indexer works. Running | tstats count where host=MYHF by index,_time only shows the _introspection and _telemetry indexes. Any help would be appreciated. Thanks, Neerav
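One thing worth checking (a sketch of a possible cause, not a confirmed diagnosis): the forwardedindex filters in outputs.conf on the Heavy Forwarder decide which internal indexes get forwarded, and if they have been overridden, _audit and _internal can be filtered out while _introspection and _telemetry still pass. The stock defaults look roughly like this:

[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_introspection|_internal|_telemetry)
forwardedindex.filter.disable = false

Running splunk btool outputs list tcpout --debug on the HF shows the effective settings and which app they come from.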
Hello Splunkers, I want to extract a 10-character path segment from a URL, but unfortunately I always get this error: Error in 'rex' command: The regex '.*\/(([0-9a-z]{10}))' does not extract anything. It should specify at least one named group. Format: (?<name>...). However, I want to extract the path from this URL: https://example.com/8a2a6063b3 This is the search I used: index=FP_proxy | rex field=url "http[s]?:\/\/[a-zA-Z0-9-]{1,}\..*\/(([0-9a-z]{10})?<url__path>)" Your help is much needed and appreciated to fix this issue.
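The error just means rex needs a named capture group, i.e. (?<name>...) with ?<name> placed immediately after the opening parenthesis. A corrected sketch (the field name url_path is my choice):

index=FP_proxy
| rex field=url "https?:\/\/[a-zA-Z0-9-]+\.[^\/]*\/(?<url_path>[0-9a-z]{10})"

For https://example.com/8a2a6063b3 this should put 8a2a6063b3 into url_path.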
Hi, I'm looking for a way to raise an alert in Splunk when the number of errors for a given event exceeds a set threshold within a set period of time - for example >10 events in a 5 minute period (alert raised). It must only be raised once. I then want another alert to be raised when that previous alert no longer applies (alert over) - for example there are now 0 events in a 5 minute period. This also must only be raised once. If, subsequently, the first alert is triggered again, then it will alert and so on, toggling between alert raised and alert over. Any help would be much appreciated as I can't find any info on how I might go about this. Thanks in advance. Phil Hamer
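A rough starting point (index, sourcetype, and the error condition below are placeholders): two scheduled searches over 5-minute windows, one firing when the count crosses the threshold and one firing when it drops back to zero.

"Alert raised" search, run every 5 minutes over the last 5 minutes:

index=my_index sourcetype=my_sourcetype "ERROR"
| stats count
| where count > 10

"Alert over" search, same schedule:

index=my_index sourcetype=my_sourcetype "ERROR"
| stats count
| where count = 0

For the fire-once / toggle behaviour, the usual trick is to persist the current state with outputlookup in each search, read it back with inputlookup, and only pass a result through (and therefore alert) when the state actually changes; plain alert throttling on its own won't give a clean toggle.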
I have a table that shows data from 3 different CSV files, with different values. When I select the value that I want to filter on, I still get all the values in the table, not only the ones I wanted to show. How can I do it?
Hello! I am currently trying to create a dashboard (Splunk Enterprise Dashboard Beta) where I aggregate values from multiple repositories in a single value panel to get a total value. My issue is that I want to have a trendline of these aggregated values. Is there a way to achieve this? My current (non-trendline) query looks like this:

index="my-index" repo=*
| chart latest(***.***.***.Summary) as "Summary" by repo
| eval total = sum(Summary)
| stats sum(Summary) as "Aggregated Summary"

Thanks in advance.
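A single value panel only draws a trendline when the search returns a time series, so one sketch (reusing the field placeholder from the query above, with an hourly bucket you can adjust) is to keep the latest value per repo per time bucket and then sum across repos per bucket:

index="my-index" repo=*
| bin _time span=1h
| stats latest(***.***.***.Summary) as Summary by _time, repo
| stats sum(Summary) as "Aggregated Summary" by _time

The single value visualization then uses the last bucket as the big number and the earlier buckets as the trend.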
Hi all, Hoping to get some clarity on the SolarWinds Add-on for Splunk. I'm trying to install this onto my SHC (4 SHs) with a deployer. Pushing from the deployer is all good. I then jump onto a search head and create the inputs and account to talk to SolarWinds, and the SHs replicate those local changes throughout the cluster. After that I get absolutely no data, and no errors to say what's not working. I've tested this app on a separate indexer on one site, another indexer on another site (different subnets), and on my deployer, and it works instantly. As soon as I put it on my SHC via the deployer, it does not work at all. Is this app supposed to work with an SHC, or am I doing something completely wrong?
I have a query where I am using three joins to combine data from a lookup, an index, and a summary index. I am also running the search over all time and for all hosts (158). If I run it for 10 hosts at a time I get the correct results, but for anything more than 10 the values come back as 0. Is there a limit on Splunk results, or is there a setting or a .conf file I need to change? Below is the search I am running:

| inputlookup event_time.csv | table host _time |rename host as orig_host | eval Latest_Event_Time=strftime(Latest_Event_Time,"%Y-%m-%d %H:%M:%S") | join orig_host max=0 [ search index=summary | table _time orig_host Boot_Time PCIe_Bus_Id MODEL PN BOARDPN SN VBIOS INFOROM SMs L2SLICES | eval GPU = "SN: ".SN." PN: ".PN." BOARDPN: ".BOARDPN | sort 0 +_time | streamstats first(Boot_Time) as Start_Time last(Boot_Time) as Boot_Time by SN GPU orig_host | table SN GPU orig_host Start_Time Boot_Time | stats last(Boot_Time) as Boot_Time by SN GPU orig_host Start_Time |replace "" WITH "remove" IN Start_Time |replace "" WITH "remove" IN Boot_Time |search Start_Time!=remove |search Boot_Time!=remove ] | join orig_host Boot_Time type=left [ search index=summary summary_type=preos_fru_boot_time earliest=0 | sort 0 orig_host +_time | table orig_host _time Boot_Time Last_Boot_Time | rename Boot_Time as Next_Boot_Time Last_Boot_Time as Boot_Time ] | sort PRODUCTSN Start_Time | reverse | autoregress Start_Time as Next_Host_Start_Time | autoregress SN as Next_SN | reverse | eval Next_Host_Start_Time = if (SN != Next_SN, "", Next_Host_Start_Time) | eval Next_Host_Start_Time = strptime(Next_Host_Start_Time, "%Y-%m-%d %H:%M:%S") | eval Next_Boot_Time = strptime(Next_Boot_Time, "%Y-%m-%d %H:%M:%S") | eval Latest_Event_Time = strptime(Latest_Event_Time, "%Y-%m-%d %H:%M:%S") | eval Estimated_End_Time = min(Latest_Event_Time, Next_Boot_Time, Next_Host_Start_Time) | eval Estimated_End_Time = strftime(Estimated_End_Time, "%Y-%m-%d %H:%M:%S") | eval Next_Host_Start_Time = strftime(Next_Host_Start_Time, "%Y-%m-%d %H:%M:%S") | eval Next_Boot_Time = strftime(Next_Boot_Time, "%Y-%m-%d %H:%M:%S") | eval Latest_Event_Time = strftime(Latest_Event_Time, "%Y-%m-%d %H:%M:%S") | eval search_end_time = if("" = "", now(), "") | eval Estimated_End_Time = strptime(Estimated_End_Time, "%Y-%m-%d %H:%M:%S") | eval Estimated_End_Time = min(search_end_time, Estimated_End_Time) | eval etime = strptime(Start_Time, "%Y-%m-%d %H:%M:%S") | eval ltime = Estimated_End_Time | eval Estimated_Installed_Time = ltime - etime | eval Estimated_Installed_Time = round(Estimated_Installed_Time,0) | eval Estimated_End_Time = strftime(Estimated_End_Time, "%Y-%m-%d %H:%M:%S") | eval job_days = 0 | table SN GPU orig_host Start_Time Estimated_End_Time Estimated_Installed_Time job_days | sort SN Start_Time | join orig_host max=0 type=left [ search index=jobs | convert dur2sec(start_to_end_time) as diff_in_seconds | eval job_hours=round((diff_in_seconds)/60/60,1) | stats sum(job_hours) as total_job_hours by orig_host time_start, time_end, start_to_end_time, partition, submit_line, account, user, job_id | convert dur2sec(start_to_end_time) as diff_in_seconds | eval job_days=round((diff_in_seconds)/60/60/24,6) | fields orig_host time_start time_end start_to_end_time job_days ] | eval job_start_time = strptime(time_start, "%Y-%m-%d %H:%M:%S") | eval job_end_time = strptime(time_end, "%Y-%m-%d %H:%M:%S") | eval Start_Time = strptime(Start_Time, "%Y-%m-%d %H:%M:%S") | eval Estimated_End_Time = strptime(Estimated_End_Time, "%Y-%m-%d %H:%M:%S") | eval diff =
job_start_time - Start_Time | eval diff2 = Estimated_End_Time - job_end_time | search (diff > 0 AND diff > 0) OR job_days = 0 | eval Start_Time = strftime(Start_Time, "%Y-%m-%d %H:%M:%S") | eval Estimated_End_Time = strftime(Estimated_End_Time, "%Y-%m-%d %H:%M:%S") | fields SN GPU orig_host Start_Time Estimated_End_Time Estimated_Installed_Time time_start time_end start_to_end_time | convert dur2sec(start_to_end_time) as start_to_end_time | stats sum(start_to_end_time) as Job_Runtime by SN GPU orig_host Start_Time Estimated_End_Time Estimated_Installed_Time | eval Job_Runtime = if (Estimated_Installed_Time<Job_Runtime, Estimated_Installed_Time, Job_Runtime) | eval Estimated_Installed_Time=tostring(Estimated_Installed_Time, "duration") | eval Job_Runtime=tostring(Job_Runtime, "duration") | fillnull value="00:00:00" Job_Runtime | sort SN Start_Time    
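One likely culprit (a sketch; the numbers below are quoted from memory of the defaults, so verify against your own limits.conf): join silently truncates its subsearch once the subsearch row or time limits are hit, and running over all time for 158 hosts makes those inner searches far bigger than the 10-host case. The relevant limits.conf settings are roughly:

[join]
subsearch_maxout = 50000
subsearch_maxtime = 60

[subsearch]
maxout = 10000
maxtime = 60

Check the job inspector for messages about subsearches being finalized or truncated; raising the limits or restructuring away from join (for example, a single base search with stats by host) are the usual fixes.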
We are planning to procure several software platforms (e.g. Workday, Salesforce, Contact Center CX etc) in the near future as part of an expansion effort. I'm trying to get ahead and understand what the raw logs for each of these various platforms will look like so that our SOC Team can start producing pseudo SPL queries that we can eventually incorporate into Splunk for reporting and alerting. Is there any online repo that has sample raw logs from such platforms (preferably from their sandbox environment) that we could upload as flat files to Splunk and start experimenting with (e.g. parsing, transforming, etc)?
How do I convert the following time: 2023-05-18T08:11:52.000-07:00 into this format: 2023-05-18 08:11:52?
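A sketch with eval, assuming the value is in a field called my_time (swap in your own field name; the %:z directive covers the -07:00 style offset):

| eval my_time = strftime(strptime(my_time, "%Y-%m-%dT%H:%M:%S.%3N%:z"), "%Y-%m-%d %H:%M:%S")

Note that strptime honours the offset and strftime renders in your timezone, so the hour can shift; if you only want to reformat the text without any timezone conversion, use string functions such as substr or replace instead.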
A few events containing sensitive info have already been indexed in Splunk SaaS Cloud. How can we mask that sensitive data in the events, or use an event delete option so that users cannot view those events? Please advise. Example event:

{"@timestamp":"2023-05-14T15:32:07.449Z", "log.level": "Msg", "labels.message":"system Authenticate customer :dda9c1ad-2df1-ehetiokhnergiwre453644836-sgnreg346893 PAN1 123456789 request AuthVerificationRequest(authRequest=null, accountNumber=null, pan=123456789, pin=0987654321pcw/5, txnsubtypcde=Full_pan_Lookup)" , "ecs.version": "1.2.0","process.thread.name":"http-nio-8080-exec-150","log.logger":"com","labels.appId":"IVR-BEE","labels.appVersion":"1","labels.channelId":"ACIVR","labels.correlationId":"ujb6678879-2df1-4a32-83e5-4198dbcfe913","labels.flow-id":"AUTH-PIN","labels.interactionId":"5646540-8a2c-4893kr-58434","labels.tenantId":"lll","labels.url":"/digital-auth/engineering/authenticateuser/v1/auth"}
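Once events are indexed they cannot be edited in place. For data that is already in the index, the usual options are the delete command (needs a role with the can_delete capability; it hides events from search results but does not reclaim disk space) or asking Splunk Cloud support about purging the affected buckets. A sketch, with the index and sourcetype as placeholders and the search narrowed to just the offending events:

index=my_index sourcetype=my_sourcetype "AuthVerificationRequest" "pan="
| delete

Masking with SEDCMD or INGEST_EVAL only applies at ingest time, so it protects future events but not the ones already indexed.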
How can we restrict a Splunk admin role from being able to run an index="*" search? This is killing our Splunk resources and we need to restrict the access. Apart from user awareness/education, is there any way to implement this?
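There is no switch that blocks the literal index="*" for a role that is allowed to search every index; the lever you do have is srchIndexesAllowed in authorize.conf, which limits which indexes a role can search at all (the built-in admin role normally has everything, so in practice this means creating a more restricted role and moving people onto it). A sketch, with role and index names as placeholders:

[role_limited_admin]
importRoles = power
srchIndexesAllowed = main;app_index_1;app_index_2
srchIndexesDefault = main
srchJobsQuota = 10
srchDiskQuota = 500

The job and disk quotas at least contain the damage when someone does run a very broad search.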
I have an index with JSON data that represents call data (phone calls), but there is nothing native in the index that represents lists. Assume, for example, I have a list called "Splunk Legends" and it represents 10 numbers. Currently, if I want to count all calls from the last 30 days from Splunk Legends, I do:

index=mydata | stats values(*) by guid -- because I'm joining some other interesting information in this index | lookup mylookup.csv number OUTPUT list | search list="Splunk Legends"

What I dislike about this, of course, is that I have to search the entire index. Thoughts on a better way to match a value against an external data source? Thank you!
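One common pattern is to push the lookup into a subsearch so the filter is applied up front and only matching events are pulled from the index (this assumes the events have a field called number for the subsearch terms to match against):

index=mydata
    [| inputlookup mylookup.csv
     | search list="Splunk Legends"
     | fields number ]
| stats values(*) as * by guid

The subsearch expands to (number="..." OR number="...") for the numbers on the list. If the numbers only appear in the raw text rather than as an extracted field, add | rename number as search at the end of the subsearch so the values are used as plain search terms instead.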
Hello, Team! I see delays in events arriving in the indexes. Events are collected by SplunkForwarder agents. When events stop arriving completely, restarting the agents helps, but when events are merely delayed, restarting the agents does not help. What can help?
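As a first step it helps to measure where the lag actually is. A sketch comparing event time to index time (the host filter is a placeholder):

index=* host=my_delayed_host
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag by host, sourcetype

Consistently large lag usually points at forwarder throughput limits (maxKBps under [thruput] in limits.conf on the forwarder), blocked queues on the indexers (look for blocked=true in metrics.log in _internal), or timestamp/timezone parsing, rather than the network itself.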
I'm trying to put together an allow list for analytics.api.appdynamics.com. We cannot use the DNS name, and every document I have read says to look in the AWS IP address listing for the region that the DNS name is in. This is us-west-2, and when I look for just the 52. and 54. IP address ranges, this turns out to be 120+ entries, and some of those listed are /14s. Surely there must be some documentation that ties this down tighter, as it spans a huge number of IP addresses, many of which I would think aren't used by AppD.
I'm not very experienced at creating Splunk queries, so I would kindly ask for your support with the following: I need a list of all the different versions of an application currently installed on all PCs, with the related username, hostname, and source IP. My draft query is the following:

index=* sourcetype=* "pippo app"
| stats list(file_name) as "application" by src_user
| table username, hostname, src_ip, application with version

Thank you in advance.
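A sketch of one way to shape this, assuming the events carry fields named file_name, file_version, src_user, dest and src_ip (rename these to whatever your sourcetype actually provides):

index=* sourcetype=* "pippo app"
| stats values(file_version) as version by src_user, dest, src_ip, file_name
| rename file_name as application
| table src_user, dest, src_ip, application, version

Narrowing index=* and sourcetype=* to the real index and sourcetype will also make this much cheaper to run.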
Hi Team, Kindly check the logs below:

[19-May-2023 06:15:55.341][INFO] abc@abc.com@ABC-CB-NOC, 1.1.1.1:61, create, user:Test, appliances: ALR-prod1 , changeset:devices { device{ALR-prod1} { config { orgs { org-services{ATCHYUTH-NOC} { user-identification { local-database { users { + user{Test} { + email-address test@abc.com + passwd + status CREATED} } } } } } } } }

[28-Jun-2022 08:35:48.010][INFO] abc@abc.com@ABC-CB-NOC, 1.1.1.1:61, create, authentication-method:Dummy-Auth, template:Access-Template , changeset:devices { template{Access-Template} { config { orgs { org-services{Atchyuth-NOC} { user-identification { authentication-methods { + authentication-method{Dummy-Auth} { + method { + local }} } } } } } } }

[28-Sep-2020 12:13:07.137][INFO] abc@abc.com@ABC-CB-NOC, CLI 'set devices template ABC config orgs org-services ATCHYUTH-NOC security captive-portal url abc.net'

[28-Jun-2022 08:35:48.010][INFO] abc@abc.com@ABC-CB-NOC, 1.1.1.1:53, create, authentication-method:Dummy-Auth, template:Access-Template, changeset:devices { template{Access-Template} { config { orgs { org-services{Atchyuth} { user-identification { authentication-methods { + authentication-method{Dummy-Auth} { + method { + local }} } } } } } } }

[28-Sep-2020 12:13:07.137][INFO] abc@abc.com@ABC-CB-NOC, CLI 'set devices template ABC config orgs org-services ATCHYUTH security portal url abc.net'

The logs above share one common field, org-services. Based on org-services I need to create a new field and extract its values into that field. Kindly help me with props.conf and transforms.conf, and let me know the regex pattern that will create a new field based on org-services.
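A sketch of a search-time extraction, assuming a new field name of org_service (my choice) and your own sourcetype in the props.conf stanza; the regex covers both the org-services{NAME} form and the CLI "org-services NAME" form seen above:

# transforms.conf
[extract_org_service]
REGEX = org-services[\{\s]+(?<org_service>[^\}\s]+)

# props.conf
[your:sourcetype]
REPORT-org_service = extract_org_service

You can verify the pattern first with | rex "org-services[\{\s]+(?<org_service>[^\}\s]+)" in a search before committing it to the .conf files.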
I would like to construct a query comparing 2 fields (hash values) in 2 different indexes (A and B). If a hash value in index A matches a hash value in index B, then ignore those results; otherwise, return the results whose hashes do not match.
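A sketch of the usual stats-based pattern, assuming both indexes expose the hash in a field called hash (adjust the names, and use rename or coalesce first if the field is named differently in each index):

(index=A OR index=B)
| stats dc(index) as index_count values(index) as indexes by hash
| where index_count = 1 AND indexes = "A"

Hashes seen in both indexes get index_count=2 and are dropped; what remains are the index A hashes with no match in index B. Drop the indexes="A" clause if you also want the unmatched hashes from index B.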
Hi Splunkers! Is anyone able to assist me with a search that I am trying to create below? I want to extract some data from multiple JSON value fields. I am able to ingest the JSON data and extract the fields and values correctly. Sample data below:

{ "key1": "value1", "key2": "[field1 = field_value1] [field2 = field_value2] [field3 = field_value3] [field4 = field_value4]", "key3": "value3", "key4": "[field5 = field_value5] [field6 = field_value6] " }

I am trying to create the proper search to extract all of the fields and their respective field_values from the sample data above. Thanks in advance!
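A sketch using rex with max_match to pull every [name = value] pair out of the raw event (kv_name and kv_value are field names I made up, and this assumes the pairs always look like [something = something]):

index=my_json_index sourcetype=my_json_sourcetype
| rex max_match=0 "\[(?<kv_name>[^=\]\s]+)\s*=\s*(?<kv_value>[^\]]+)\]"
| table kv_name, kv_value

That yields two aligned multivalue fields; if you need each pair as its own field (field1=field_value1 and so on), combine them with mvzip and split them back out, or run the rex per source key (rex field=key2 ..., rex field=key4 ...) instead of against _raw.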