All Topics



Hello,

When analyzing web traffic logs, at times an event's url field has no accompanying http_referrer field. We are interested in finding out which URL the original request came from; there is looping involved. This is similar to the post: https://community.splunk.com/t5/Getting-Data-In/Loop-through-URL-and-http-referrer-to-find-original-request/m-p/138817#M28507

In that post, the user makes use of a script, which I cannot use in my environment. How can I use the map command (or any other command) to recursively loop through the url field and find the original domain?

For example:

index=firewall url=malicious-domain.com

Actual flow of traffic: abc.com >> bcd.com >> no http_referrer field >> malicious-domain.com (http_referrer is <empty>)

Expected result: abc.com
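A one-hop sketch with the map command, assuming the firewall events carry url and http_referrer fields (the index name and maxsearches value are placeholders):

```spl
index=firewall url="malicious-domain.com"
| stats values(http_referrer) as http_referrer
| mvexpand http_referrer
| map maxsearches=10 search="search index=firewall url=\"$http_referrer$\"
    | stats values(http_referrer) as parent_referrer by url"
```

Note that map does not recurse: each additional hop in the referrer chain needs another nested map level, so pure SPL cannot walk a chain of unbounded depth. This is a sketch of one hop, not a complete solution.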
Hi, while trying to configure the Insights app for Splunk on my search head, I am trying to send the alerts to a test index, but the test index is not populating in the app; only the indexes already shown are available. Thanks.
I have an input whose value is like an odometer, so it's cumulative. I collect a sample every 15 minutes. If I want to create a timechart that shows the total value per 15-minute duration, how would I do that? See the example below.

1/17/2023 0:01:00 value 6
1/17/2023 0:02:00 value 6
1/17/2023 0:03:00 value 6
1/17/2023 0:09:00 value 7
1/17/2023 0:10:00 value 6
1/17/2023 0:11:00 value 7
1/17/2023 0:12:00 value 8
1/17/2023 0:15:00 value 8

From minute 1 to minute 15, the total value is 54.

1/17/2023 0:16:00 value 5
1/17/2023 0:17:00 value 8
1/17/2023 0:18:00 value 5
1/17/2023 0:29:00 value 7
1/17/2023 0:30:00 value 5

From minute 16 to minute 30, the total value is 30.
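A minimal sketch, assuming each event carries a numeric field named value (the index name is a placeholder):

```spl
index=your_index
| timechart span=15m sum(value) as total
```

One caveat: timechart buckets align to clock boundaries (0:00-0:15, 0:15-0:30, ...), so a sample at exactly 0:15 lands in the second bucket, whereas the grouping in the example above counts it with the first window.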
Hi all,

We have an issue with our environment (all currently running 8.2.8 on Windows Server 2016 Standard). We have a multisite indexer cluster consisting of 2 sites with 2 indexers in each site, a separate 2-node search head cluster, and 1 indexer master node. Replication is working as expected, and when manually taking nodes offline, search and data durability behave as desired (for example, if you take a whole site or a single node in a site offline, everything is still searchable).

When a configuration bundle that requires a restart is deployed via the master node, all indexers in both sites restart at the same time, interrupting all searches. The following values are present in server.conf (using btool) on the master node:

[clustering]
percent_peers_to_restart = 10
restart_timeout = 60
rolling_restart = restart
rolling_restart_condition = batch_adding
replication_factor = 2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2

On top of that, and also unpleasant: in many cases, for example when applying changes to props.conf for existing stanzas via the master node, the indexers restart even though bundle validation on the master reported that a restart is not required. According to forum posts, this issue should have been fixed in the 6.5.2 release. This environment, however, was base-installed with 7.x, so it cannot be an issue carried along through upgrades.

Any thoughts appreciated. Many thanks and regards
Hello, I am new to Splunk and am using it for testing purposes. I installed Splunk within the last 30 days and recently installed the ITE content pack. I got the error below when searching. I found documentation saying that searching will be disabled when there are five or more license alerts, but I don't know why my usage is full.

Error in search:
Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

Licensing alert: slave had no matching license pool for the data it indexed

Licensing alert from the CLI:
4565f4ac328ca2cebf5d54342bd63c99 category:orphan_peer create_time:1673625600 description:slave had no matching license pool for the data it indexed peer_id:FDE1BB3F-01D3-4B27-B0FB-19AAE1CD27A0 severity:WARN slave_id:FDE1BB3F-01D3-4B27-B0FB-19AAE1CD27A0

Appreciate any help.
Hi Team, I have deleted some old data using the "delete" command in Splunk, but I'd like to know whether there is any way to check that the events were actually deleted from the indexer buckets. Thanks in advance!
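One way to sketch a check (the index name and search terms below are placeholders): the delete command only marks events as non-searchable, so a search over the deleted time range should return zero results, while the buckets themselves remain on disk until they age out:

```spl
index=your_index <your deleted search terms> earliest=-30d latest=now
| stats count
```

If the count is 0, the events are no longer searchable. To inspect the underlying buckets, `| dbinspect index=your_index` lists them; note that delete does not reclaim disk space.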
Hi Community,

How do I route data with props and transforms across multiple heavy forwarders?

Source A to Data Collector > IDX Cluster A
        |
  (Data Copy A)
        |
        |---> Source B to Data Collector > IDX Cluster A/B

Currently, the routing only works directly to IDX Cluster A/B, but not via the Source B HF.

Please help - Markus
Hi,

I have an application (test.app) that invokes multiple downstream application APIs (profile, payments, etc.), and we log the elapsed time of every downstream call as an element of a JSON array. Is it possible to plot a timechart of the p99 elapsed time of each downstream application call separately?

Sample log:

{
  ......
  appName : test.app,
  downstreamStats : [
    { ... pathname : profile, elapsed: 250, ... },
    { ... pathname : payments, elapsed: 850, ... }
  ]
  ......
}

I want to plot a timechart of the above logs with the p99 of elapsed time BY pathname.
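A sketch, assuming the events are indexed as JSON and that the index name below is a placeholder: expand the array into one result per downstream call with spath and mvexpand, then extract the per-call fields and chart:

```spl
index=your_index appName="test.app"
| spath path=downstreamStats{} output=call
| mvexpand call
| spath input=call
| timechart span=5m perc99(elapsed) by pathname
```

The second spath re-parses each expanded array element, so pathname and elapsed become ordinary fields per call; adjust span to taste.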
Any suggestions on how to rename fields and keep those fields in their stated table order? I have a bunch of fields that are attributes named is_XXX. I want all those fields to be on the right-hand side of the table, so if I do

<search>
| foreach is_* [ eval "zz_<<MATCHSTR>>"=if(<<FIELD>>=1,"","")]
| fields - is_*
| table entity entity_type *

it works nicely and puts the first two named fields as the first two columns, then the other fields, then all the zz_* fields.

However, as soon as I add

| rename zz_* as *

it changes the order and sorts all the columns (apart from the first two named ones) into alphabetical order. Any specifically named fields I add after entity_type keep their column order, but all fields output as a result of the wildcard lose their order after the rename.
Hi, I have 2 hosts and 2 sources. Previously I was getting data from both sources, but we have since torn down and redeployed the existing servers. As a result, the host IPs are the same but the host names changed. Now I am getting data from both hosts, but only from one source; no data arrives from the second source. How do I troubleshoot this issue?
Hello, I am trying to capture DNS log traffic from an Active Directory domain controller. The topology is this: a Splunk Cloud instance, a heavy forwarder on my LAN, and a universal forwarder on the DC. I see multiple Stream apps in the Splunk app store; which app goes where? There is "Splunk App for Stream", then there's "Splunk Add-on for Stream Forwarders", then there's something called "Splunk Add-on for Stream Wire Data". Can you please help?
Splunk UF

Hi folks, seeking help. I am new to Splunk and am trying to configure the Splunk UF. I have two VMs, both running Windows 10, and the two VMs can communicate with each other. On one VM I installed Splunk Enterprise, and on the other I installed the Splunk UF. Assuming the Splunk Enterprise VM is the receiver and the Splunk UF VM is the forwarder, I entered the Splunk UF VM's IP into the Splunk Enterprise VM as a forwarder with port 9997 (e.g. xx.xx.xxx:9997). I am still not receiving any logs from the UF VM. I would like to know whether the procedure I am following is correct. I would appreciate your kind support. Thanks in advance.
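For reference, a typical minimal setup points the forwarder at the receiver, not the other way round. A sketch assuming default install paths on both Windows VMs (the IP is a placeholder):

```
REM On the Splunk Enterprise VM (receiver): enable a listening port
"C:\Program Files\Splunk\bin\splunk.exe" enable listen 9997

REM On the UF VM (forwarder): point the UF at the Enterprise VM's IP
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" add forward-server <enterprise_vm_ip>:9997
```

Also check that the Windows firewall on the Enterprise VM allows inbound TCP 9997, and that the UF has inputs configured (e.g. via inputs.conf) so it actually has something to send.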
Hi, my query:

|tstats count where index=app-clietapp host=ahnbghjk OR host=ncsjnjsnjn sourcetype=app-clientapp source=/opt/splunk/var/clientapp/application.log by PREFIX(status:)
|rename status: as App_Status
|where isnotnull(App_Status)
|eval Sucess=if(App_Status="0" OR App_Status="", "Succ", null())
|eval Error=if(App_Status!="0", "Error", null())

Output:

App_Status  count   Error  Sucess
0           767890         Succ
6789        65      Error

But I want the output as shown below:

App_Status  Error  Sucess
6789        65     767890

Please let me know how to modify the query so that I can get the required output.
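One hedged sketch (field and index names follow the query above; the exact target layout is ambiguous, so this collapses the per-status counts into one row of Error/Sucess totals):

```spl
|tstats count where index=app-clietapp (host=ahnbghjk OR host=ncsjnjsnjn) sourcetype=app-clientapp source=/opt/splunk/var/clientapp/application.log by PREFIX(status:)
| rename status: as App_Status
| where isnotnull(App_Status)
| eval outcome=if(App_Status="0" OR App_Status="", "Sucess", "Error")
| stats sum(count) as total by outcome
| transpose header_field=outcome
```

The transpose turns the two outcome rows into two columns, giving a single row with the Error and Sucess totals.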
Hi Community,

I am testing Splunk dashboard performance with Selenium IDE. I am able to get the desired results, but the one problem I am facing is logging out of Splunk to complete the testing. Is there a point of contact who can help me with this issue? Other ideas would also work if they help me log out of Splunk from external applications.

Regards, Pravin
Hi friends, I am upgrading my Splunk Enterprise from 7.1 to 8.1. After upgrading the indexer, search head, and peer servers, the peer is not reflected under the indexer cluster group; there I can see only the master node. Do I need to follow any particular steps to upgrade such clusters?
Environment: a single Splunk Enterprise instance (v8.2.6) running on a RHEL 6.1 server, receiving data from multiple forwarders.

Issue: the license volume has always shown as 30 GB/day, for the past few years anyway. I found out today that the last license purchased (January 2022) was for 50 GB/day, but the license page still shows 30 GB/day. How do I get the correct volume showing for our licensing? I would have thought it would be automatic, or part of the license install process.
Hi all,

I have to extract the sourcetype as a field in a dashboard. There are multiple sourcetypes, like: oracle:audit:json, oracle:audit:json11, oracle:audit:json12, oracle:audit:sql11, oracle:audit:sql12.

I have written this regex:

rex mode=sed field=sourcetype "s/oracle:audit:(.*)\d\d/\1/g"

It works fine for the sourcetypes oracle:audit:json11, oracle:audit:json12, oracle:audit:sql11, and oracle:audit:sql12, but when the data comes in with oracle:audit:json, it does not give the result in the dashboard. The main search query does give results.

Macro definition:

definition = (sourcetype=oracle:audit:json OR sourcetype=oracle:audit:json11 OR sourcetype=oracle:audit:json12 OR sourcetype=oracle:audit:sysaud OR sourcetype=oracle:audit:sysaud11 OR sourcetype=oracle:audit:sysaud12 OR sourcetype=oracle:audit:sql11 OR sourcetype=oracle:audit:sql12)

I have also written a macro where I passed all the sourcetypes, but I get no result or partial results in the dashboard for sourcetype oracle:audit:json.
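A likely cause, sketched: the sed expression requires two trailing digits ((.*)\d\d), so oracle:audit:json never matches and keeps its full value, which then fails any dashboard filter expecting the stripped name. Making the digits optional covers both cases (a sketch against the sourcetype names listed above):

```spl
| rex mode=sed field=sourcetype "s/oracle:audit:([a-z]+)\d*/\1/g"
```

With this, oracle:audit:json11 and oracle:audit:json both reduce to json.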
I just came to the realization that this query shows "Missing" both when a host is missing in Splunk and when it exists in Splunk but not in the export:

index=_internal
| fields host
| dedup host
| eval host=lower(host)
| append [ | inputlookup Export.csv | rename Hostname as host | eval host=lower(host)]
| stats count by host
| eval count=count-1
| eval Status=if(count=0,"Missing","OK")
| sort Status
| table host Status

What I would like is to change the query to show where it's missing.
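One sketch that tags each side before combining, so the Status can say which side is missing (the lookup and field names follow the query above):

```spl
index=_internal
| stats count by host
| eval host=lower(host), in_splunk=1
| fields host in_splunk
| append
    [| inputlookup Export.csv
     | rename Hostname as host
     | eval host=lower(host), in_export=1
     | fields host in_export]
| stats max(in_splunk) as in_splunk, max(in_export) as in_export by host
| eval Status=case(isnull(in_export), "Missing from export",
                   isnull(in_splunk), "Missing from Splunk",
                   true(), "OK")
| sort Status
| table host Status
```

Because each side carries its own marker field, a host present in only one source has a null marker for the other, which case() turns into a specific Status.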
Hi all,

Is there a limitation on combining transforms on a source in props.conf? Here is what I did, and somehow I get no result. Whenever I delete the TRANSFORMS-reroute entry, data is received and hostnames are changed. Somehow the source matching my regex does not get rerouted to another index.

props.conf:

[source::tcp:514]
TRUNCATE = 64000
TRANSFORMS = newhost1
TRANSFORMS = newhost2
TRANSFORMS-reroute = set-index

transforms.conf:

[newhost1]
DEST_KEY = MetaData:Host
REGEX = mymatchinghost1rex
FORMAT = host::myhost1

[newhost2]
DEST_KEY = MetaData:Host
REGEX = mymatchinghost2rex
FORMAT = host::myhost2

[set-index]
DEST_KEY = _MetaData:Index
REGEX = .+mymatchingrex.+
FORMAT = myindex
WRITE_META = true

Thanks for your help. Kind regards, Harald
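A likely issue, sketched: attribute names must be unique within a props.conf stanza, so the two bare TRANSFORMS = lines overwrite each other; the conventional form is TRANSFORMS-<class> with a comma-separated list. Also, WRITE_META is not needed when DEST_KEY is set. Under those assumptions (the class names here are placeholders), props.conf could look like:

```conf
[source::tcp:514]
TRUNCATE = 64000
# one named transform class can list several transforms, applied in order
TRANSFORMS-hosts = newhost1, newhost2
TRANSFORMS-reroute = set-index
```

The transforms.conf stanzas can stay as they are, apart from dropping WRITE_META from [set-index].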
How can I write a query like the following?

index=my_app
| eval userError="Error while fetching User"
| eval addressError = "Did not find address of user"
| stats count(userError) as totalUserErrors, count(addressError) as totalAddressErrors

Expected output:

Error while fetching User      50
Did not find address of user   30
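As written, the evals assign constant strings to every event, so both counts would simply equal the total event count. One sketch that instead counts events whose raw text contains each message (index name and message strings follow the question):

```spl
index=my_app ("Error while fetching User" OR "Did not find address of user")
| eval error_type=case(searchmatch("Error while fetching User"), "Error while fetching User",
                       searchmatch("Did not find address of user"), "Did not find address of user")
| stats count by error_type
```

searchmatch() returns true when the event matches the given search string, so each event is labeled with the message it contains and the stats yields one row per error message.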