All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, how can I extract multiple values from a string after each slash? For the example below, I would like to extract field1 with the value "Subscription", field2 with the value "83C4EEEF-XXOA-1234", and so on.

/SUBSCRIPTIONS/83C4EEEF-XXOA-1234/VIRTUALGROUPS/JOHN.DOE/PROVIDERS/MICROSOFT.GRAPH/DISKENCRYPTIONSETS/JOHN.DOE-TBHOST-DWS

Thank you.
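One possible sketch (assuming the path is in _raw; the field names field1 through field5 are placeholders) is a rex with a repeated "anything but a slash" capture group for each segment:

```
| rex field=_raw "^/(?<field1>[^/]+)/(?<field2>[^/]+)/(?<field3>[^/]+)/(?<field4>[^/]+)/(?<field5>[^/]+)"
```

Each `[^/]+` matches up to, but not including, the next slash, so field1 would pick up "SUBSCRIPTIONS", field2 "83C4EEEF-XXOA-1234", and so on.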
Hi Team, I would like to compare the 5 columns below and get one more column as a count.

category  code  text  country  org
abc       100   Adv   US       12
abc       100   Adv   US       12
abc       100   Agh   Eu       13
abc       100   Agh   Eu       13

The Count column should have the number of times each combination occurs; say the first 2 entries occur 2 times, so it should display the output as:

category  code  text  country  org  Count
abc       100   Adv   US       12   2

Kindly help with the query to achieve this.
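A sketch of one way to get that count (assuming the five columns are already extracted as fields) is a stats over all five columns:

```
| stats count AS Count BY category code text country org
```

This collapses identical rows into one, so each surviving row carries the number of times that exact combination occurred.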
Greetings, dear all! Hope you are well. I need your support on how to calculate the size of the events we receive per day; for instance, how can I check the size of the data we have received over one week? I am using Splunk Enterprise (on a Linux server). Thank you in advance!
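One common sketch (assuming you can search the _internal index on the license master) sums the license usage log per day over the last week:

```
index=_internal source=*license_usage.log* type=Usage earliest=-7d@d
| eval GB = b / 1024 / 1024 / 1024
| timechart span=1d sum(GB) AS GB_per_day
```

The `b` field in license_usage.log is the ingested volume in bytes; add `BY idx` to the timechart if you want the breakdown per index.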
I have a layered network with the bulk of the Splunk infrastructure in Zone 1 (Indexer, Collector, Search Head). Within this zone, I'm using split UDP ports to direct specific syslog traffic to the appropriate indexes, for example:

Palo Alto syslog: Indexer IP: UDP/5140, which ends up in the Palo Alto index
Cisco: Indexer IP: UDP/5141, which ends up in the Cisco index
iDRAC / iLO: Indexer IP: UDP/5142, which ends up in the iDRAC/iLO index
etc.

In the other network zones (Zones 2, 3, 4, etc.) I have a Heavy Forwarder which allows devices in these zones to funnel their traffic via the HF to the Indexer. (The only communication across the firewall between zones is HF to Indexer; no other direct communication from hosts in those zones to the Indexer in Zone 1 is allowed.)

Should I configure my HF to also have the same data inputs (ports 5140, 5141, 5142, etc.) and configure devices in these zones to point to those ports, or should devices in Zones 2, 3, and 4 simply send to UDP 514, with the traffic split up appropriately when it reaches the indexer?

I'm guessing the data inputs should look the same on all HFs as on the Indexer. Can anyone confirm?
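If the HFs mirror the indexer's inputs, their inputs.conf could look something like the sketch below (the index and sourcetype names are assumptions to adapt to your environment):

```
# inputs.conf on each Heavy Forwarder, mirroring the indexer's split UDP ports
[udp://5140]
index = palo_alto
sourcetype = pan:log

[udp://5141]
index = cisco
sourcetype = cisco:syslog

[udp://5142]
index = idrac_ilo
sourcetype = syslog
```

With the index assigned at the first Splunk instance that parses the data, the indexer does not need to re-route it, so keeping the port-to-index mapping identical on HFs and indexer keeps the behavior consistent regardless of which path an event takes.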
Hi everyone, can someone guide me on how to extract the field (highlighted in bold) from this raw data?

2021-05-03T20:34:46.574469127Z app_name=blazegqlgway-a environment=e2 ns=blazegateway pod_container=blazegqlgway-a pod_name=blazegqlgway-a-deployment-11-5sk6b stream=stdout message=2021-05-03 13:34:46.574 INFO [dgfgateway,c6e3e9be5ff5499a,c6e3e9be5ff5499a,true] 1 --- [nio-8443-exec-7] c.a.s.g.s.h.ResponseRetrieverService : nodeUrl=https://abc/graphql, caller=200000949GCPSfdcCommerical, nodeHttpStatus=200, nodeResponseTime=691

2021-05-03T10:04:33.485822671Z app_name=blazegqlgway-a environment=e2 ns=blazegateway pod_container=blazegqlgway-a pod_name=blazegqlgway-a-deployment-11-5sk6b stream=stdout message=2021-05-03 03:04:33.485 INFO [dgfgateway,68cdbc43702536b4,68cdbc43702536b4,true] 1 --- [nio-8443-exec-7] c.a.s.g.s.h.ResponseRetrieverService : nodeUrl=https://jkl/graphql, caller=200000949GCPSfdcCommerical, nodeHttpStatus=200, nodeResponseTime=615

Thanks in advance
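The bold highlighting did not survive in this text, but since the tail of each event is a set of key=value pairs, a rex sketch for one of them (nodeResponseTime is an assumption; swap in the pair you actually need) would be:

```
| rex "nodeResponseTime=(?<nodeResponseTime>\d+)"
```

The same pattern shape works for the other pairs, e.g. `nodeHttpStatus=(?<nodeHttpStatus>\d+)`.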
Hi everyone, can someone guide me on how I can extract the field highlighted in bold below?

2021-05-04T05:01:03.702620566Z app_name=blazegqlgway-a environment=e2 ns=blazegateway pod_container=blazegqlgway-a pod_name=blazegqlgway-a-deployment-11-5sk6b stream=stdout message=2021-05-03 22:01:03.702 INFO [dgfgateway,264799cd7c73ee07,264799cd7c73ee07,true] 1 --- [nio-8443-exec-6] c.a.s.g.s.h.ResponseRetrieverService : nodeUrl=https://abc/graphql, caller=200005348C360VIEW, nodeHttpStatus=200, nodeResponseTime=1163

2021-05-03T21:44:45.4034061Z app_name=blazegqlgway-a environment=e2 ns=blazegateway pod_container=blazegqlgway-a pod_name=blazegqlgway-a-deployment-11-5sk6b stream=stdout message=2021-05-03 14:44:45.402 INFO [dgfgateway,daccee3618879e78,daccee3618879e78,true] 1 --- [nio-8443-exec-8] c.a.s.g.s.h.ResponseRetrieverService : nodeUrl=https://abc/graphql, caller=200000949GCPSfdcCommerical, nodeHttpStatus=200, nodeResponseTime=649
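The bold formatting was lost in this text; assuming the target is one of the trailing key=value pairs, a rex sketch for the caller value (stop at the following comma) would be:

```
| rex "caller=(?<caller>[^,\s]+)"
```

Adjust the key name in the pattern to whichever pair was highlighted in the original post.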
I am getting the below error while applying the shcluster changes to the SH clusters:
For syslog, Splunk recommends using a dedicated syslog server. So, for NetFlow data, are there any particular best practices for ingesting into Splunk? Can I continue using a syslog server even for NetFlow data, or should I use the Splunk Stream app? Please advise. Thanks.
Hi all, I'm new to Splunk administration and have been tasked with upgrading our 8.0.3 instance to 8.1.3. We have 1 indexer and 1 search head, all Windows-based. Which do I upgrade first? Thanks.
Our application does a nightly re-index on node 1; once that's complete, the index build is copied to 6 other nodes, and each of the other nodes then restores the files. These events are noted as "Index restore started." and "Index restore complete." in the application logs. I would like to have a dashboard panel that shows how long it takes from "started" to "complete" on each host, to be able to see trends for this over time. How do I go about this?
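A sketch of one approach (the index name app_logs is an assumption) pairs the two messages per host with transaction, which yields a duration field in seconds, then charts it over time:

```
index=app_logs ("Index restore started." OR "Index restore complete.")
| transaction host startswith="Index restore started." endswith="Index restore complete."
| timechart span=1d avg(duration) AS avg_restore_seconds BY host
```

For large volumes, the same pairing can be done more cheaply with `stats range(_time) AS duration BY host` over a per-restore grouping, but transaction is the simplest starting point for one restore per host per night.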
I have 3 machines with 32-bit Windows 2003, but I can't find an agent (Universal Forwarder). What can I do?
Hey Splunk friends, we're very new customers to Splunk. We're trying to find an easy way to create JIRA tickets from notable events in ES. We installed the "JIRA Service Desk simple addon" v1.0.26. When we try to create a ticket, we get the error "Adaptive Response Action Cannot Be Dispatched, Unexpected Token < in JSON at position 0". Basic connectivity seems to work; the app is able to read info from our JIRA instance and populate some dropdown menus. JIRA is hosted in the cloud, and we're also on Splunk Cloud 8.1. Any ideas on where to troubleshoot? Thanks!
I am ingesting 100 Windows machines, and the events that affect my license consumption the most are 5156, 5157, 5158, 4658, 4663, 4656, and 4690. I don't really know if I should filter them or if I can get some valuable correlation events out of them. I have already filtered the first 2 according to the Splunk documentation, but my client doesn't want me to filter by EventCode, only by "Application Name". What I see is different: "EventCode" has its value pasted directly after it, while "Application Name" has a blank space after the colon, and I don't know how I should write the regular expression if I want to filter only by "Application Name". For example:

Application Name: \device\harddiskvolume2\windows\system32\svchost.exe

props.conf:

[WinEventLog:Security]
TRANSFORMS-wmi=wminull

transforms.conf:

[wminull]
REGEX=(?m)^EventCode=(592|593)
DEST_KEY=queue
FORMAT=nullQueue

https://docs.splunk.com/Documentation/Splunk/6.6.2/Forwarding/Routeandfilterdatad
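A transforms.conf sketch that anchors on the "Application Name:" line instead of EventCode (the svchost path is taken from the example above; the exact escaping and volume number pattern are assumptions to verify against your raw events):

```
[wminull]
REGEX = (?m)^Application\sName:\s+\\device\\harddiskvolume\d+\\windows\\system32\\svchost\.exe
DEST_KEY = queue
FORMAT = nullQueue
```

The `\s+` after the colon handles the blank space you noticed, and `harddiskvolume\d+` tolerates the volume number varying across hosts.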
Hi, I am trying to update an app in our Splunk environment. When I click on "Install app from file" it gives a 500 error: "The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage." I tried to update the app directly, but still the same issue. Do we need to check any settings?
I am trying to find the average time duration in hh:mm from the data in one column. Below is the search query, which gives me the data shown, and I want the average time duration in hh:mm, e.g. the average time duration of the column is 01:22 or whatever the value is. I tried looking for articles on this, but nothing seems to work. Any help would be greatly appreciated. Thank you.

search month="Apr,2021" | stats count by "TotalTimeTaken (hh:MM)"

"TotalTimeTaken (hh:MM)"
00:24
01:44
02:23
00:54
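One possible sketch: convert each hh:mm value to minutes, average, then format the average back to hh:mm (the field name is taken from the question; quoting it with single quotes inside eval is required because it contains spaces):

```
search month="Apr,2021"
| eval parts = split('TotalTimeTaken (hh:MM)', ":")
| eval minutes = tonumber(mvindex(parts, 0)) * 60 + tonumber(mvindex(parts, 1))
| stats avg(minutes) AS avg_min
| eval avg_hhmm = printf("%02d:%02d", floor(avg_min / 60), round(avg_min % 60))
```

For the four sample values (24, 104, 143, 54 minutes) this averages to roughly 81 minutes, i.e. about 01:21.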
Hi, all my URLs have this general format: https://value.company.com.au/etc/. Is there a way I can extract URLs and always stop at the .au, but also have the .au included in the field? Some differ with a port at the end, so it goes https://value.company.com.au:9001, but I don't want the port or anything after the /. Do you have any recommendations on what the regex would look like?
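A rex sketch (the source field name url is an assumption) that captures the scheme and host up to and including .au, stopping before any port or path:

```
| rex field=url "(?<base_url>https://[^/:]+\.au)"
```

The `[^/:]+` class refuses to cross a `:` or `/`, so both https://value.company.com.au/etc/ and https://value.company.com.au:9001 yield base_url=https://value.company.com.au.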
Right now we are sending logs from our Kubernetes nodes to a Nagios log server. We would like to forward the logs to both the Nagios and Splunk log servers during this migration so our migration team can validate the changes in parallel. We will have to validate the logs and recreate in Splunk the dashboards and alerts that we already have in Nagios. We do not want to lose the Nagios monitoring and alerting until we can confirm everything is migrated to Splunk. So, is it possible to send the logs from our Kubernetes DaemonSets to both the Nagios and Splunk log servers at the same time?
I have installed CentOS 7 on an EC2 server, and on CentOS 7 I installed Splunk and a Universal Forwarder. Now I need help with how to store client SSH login and logoff records.
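A sketch of one approach (assuming the default CentOS auth log path and an index named os, both of which you may need to adapt): monitor /var/log/secure with the Universal Forwarder, where sshd writes its session messages.

```
# inputs.conf on the Universal Forwarder
[monitor:///var/log/secure]
index = os
sourcetype = linux_secure
```

Logins and logoffs can then be searched with something like `index=os sourcetype=linux_secure ("Accepted" OR "session opened" OR "session closed")`.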
I'm trying to use a case statement and assign part of a field for each branch. For example: case(len(field)=5, a regex that takes the first 3 characters of the field, len(field)=7, a regex that takes the first 5 characters, ...)
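Rather than a regex per branch, eval's substr can take the leading characters directly inside case (a sketch; the branch lengths come from the question, and the final true() arm is a catch-all assumption for other lengths):

```
| eval prefix = case(len(field) == 5, substr(field, 1, 3),
                     len(field) == 7, substr(field, 1, 5),
                     true(), null())
```

In eval, `substr(field, 1, n)` returns the first n characters, so no regex engine is needed for fixed-position slicing.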
Hello Splunkers, since I am not a computer science major, I have a hard time with regex. I have field values like lxw0000.usr.osd.mil, amico0000, alsedx.osd.mil, and many more variations. How can I extract the value before the first period? For example, just:

Server
lxw0000
amico0000
alsedx

When I use (?<server>.*)\. it does not show amico0000. Your quick help will be appreciated. Thanks.
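A sketch that captures everything up to the first period, or the whole value when there is no period at all (the source field name host is an assumption):

```
| rex field=host "^(?<server>[^\.]+)"
```

Because `[^\.]+` does not require a literal period to follow, a value like amico0000 is captured whole, unlike a pattern that ends in `\.`.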