All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

After I run my query, I am unable to see the logs it pulls under Events. I can't see them using the raw, list, or table options. I used to be able to see them but can't now. I'm experiencing this in both the Chrome and Internet Explorer browsers. Does anyone know what is going on?
I have a critical question. When I push apps and add-ons to both the search head cluster and the indexer cluster, we use the master node for the indexer cluster. The question is: the add-on is downloaded from Splunkbase as a compressed archive (it opens in WinRAR). Can I use it immediately, or must I extract it first? And if I must, how do I do that?
Let's say my dashboard has two panels with timechart displays. As I hover over panel1, I'd like for panel2 to show an indication (possibly via a vertical line) of the time in panel1 where my sprite is currently. This makes it easier to correlate across multiple panels, and can help when analyzing the charts.
We have discovered that on one of our servers we had an error in the monitoring stanza and were not getting the logs for several directories. We can go back and get those logs from the backups. These logs would be restored to a new temp folder, something like /datatemp/. How would I set it up to pull these logs in, but without the /datatemp/ in the source?
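One hedged option (the paths, index, and sourcetype below are placeholders, not from the original setup): inputs.conf lets a monitor stanza override the source field with a static value, so a one-off input for the restored files could look like:

```
# inputs.conf -- temporary input for logs restored from backup
[monitor:///datatemp/var/log/myapp]
sourcetype = myapp_logs    # match the original sourcetype
index = main               # match the original index
source = /var/log/myapp    # static override so /datatemp/ never appears in source
```

Note that source here is a single fixed string for everything the stanza matches; if each restored file needs its own rewritten source path, a props/transforms override on MetaData:Source would be needed instead.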
I am preparing to migrate my Splunk data storage to AWS S3 using Smart Store. My S3 buckets will be replicated across regions in AWS for failover and I have a requirement to fully test that capability. In theory, all of the data in my cluster should replicate to my failover region and I should be able to simply point my new Splunk instances to it and go. Has anyone ever tested this out and have insights into what will go wrong?
I have configured ES to download the list of free webmail-hosting domains below as an intelligence download (Data inputs -> Intelligence Downloads). I don't want to trigger Threat Activity results based on these domains since they include common services like outlook.com, gmail.com, yahoo, etc., so I unchecked the Is Threat Intelligence checkbox when creating the file. It has successfully downloaded the file to splunk/var/lib/splunk/modinputs/threatlist/filename.txt , but I am at a loss for how to get it into a CSV for use in search. I tried to create a lookup definition in the GUI, but I presume that dialog is only able to see CSVs which are in the /lookups directories for various apps. Does anyone have any suggestions for using my new intelligence file as a lookup? Thanks! hxxps://gist.githubusercontent.com/tbrianjones/5992856/raw/93213efb652749e226e69884d6c048e595c1280a/free_email_provider_domains.txt
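If your ES version ships the inputintelligence generating command (worth confirming in the docs for your release), one sketch is to read the intelligence file directly and, optionally, materialize it as a lookup (the lookup filename below is a placeholder):

```
| inputintelligence free_email_provider_domains.txt
| outputlookup free_webmail_domains.csv
```

After the outputlookup runs once (or on a schedule), free_webmail_domains.csv becomes visible to the lookup-definition dialog like any other CSV in the app's lookups directory.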
Hi, I am a newbie to Splunk and am trying to build a search query that can parse specific text like the examples below to get the sum of the AAA file content lengths, but I couldn't figure out the search query for that. Any help will be appreciated. Thanks.

AAA file content length is 67095 bytes
AAA file content length is 7095 bytes
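Not definitive, but one plausible approach (the index and sourcetype are placeholders): extract the number with rex, then sum it with stats:

```
index=main sourcetype=my_logs "AAA file content length"
| rex "AAA file content length is (?<content_length>\d+) bytes"
| stats sum(content_length) AS total_bytes
```

With the two sample events above, total_bytes would come out to 74190.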
I let our license elapse for a few days before I applied the new one. Now it says "Your Splunk Light license expired or you have exceeded your license limit too many times". Judging from the responses here, I need to contact support to get a reset license. I cannot figure out how to contact support without an entitlement. Surely I can fix this without purchasing an entitlement. Am I missing something obvious?
I have tried versions 2.5 and 2.6 of the add-on on both the 7.0 and 7.3 versions of Splunk (two separate servers) and I receive the same error. Has anyone resolved a similar issue?
Hi all, I've been struggling with a good query for this for a few days. Basically, I'm trying to track users that drop off between pages in a guided web application. I'm able to get the results for Page1 and for Page2 individually, but I don't know how to combine the two queries to get the desired result. I don't know if I need to work with join or distinct count. Basically, on page1 I can dedup the AccountNum, UserID pair (I don't care if the same user comes through with the same account, but I do care if a different user does). Users B and F both came to page one with account 567, but only F proceeded. I really want to learn, so any guided help or explanation would be amazing. Please let me know if anything is unclear.
Is there a way to identify when we are getting close to the concurrency limits? We know that there are error messages when the limit is hit, but we would like to know how close we are, and based on that we can make changes to srchJobsQuota and maybe cumulativeSrchJobsQuota.
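One possible way to watch this (a sketch; adjust for your topology): poll the jobs endpoint with the rest command and count running searches, e.g. on each search head:

```
| rest /services/search/jobs splunk_server=local
| search dispatchState=RUNNING
| stats count AS running_searches
```

Scheduling this search and alerting when running_searches approaches your configured srchJobsQuota would give early warning before the hard limit is hit.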
I have an output where random columns contain 0. I would like these 0s to be replaced with some text for reporting. Is it possible to replace 0 in any field? Example output below:

Jan2019  Feb2019  Mar2019  Apr2019
0        1        8        10
1        20       3        40
9        1        0        4

The above should change to:

Jan2019  Feb2019  Mar2019  Apr2019
NA       1        8        10
1        20       3        40
9        1        NA       4

There are also 0s within other cell values (10, 40, 20, etc.), but only cells that equal 0 should be replaced. Thanks in advance for any responses.
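A hedged sketch using foreach (assuming the month columns all end in a year like 2019; the wildcard pattern is an assumption about the real field names):

```
... | foreach *2019
    [ eval <<FIELD>> = if('<<FIELD>>' == 0, "NA", '<<FIELD>>') ]
```

Because the comparison is == 0 rather than a substring match, cell values such as 10, 20, or 40 are left untouched.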
We have a support ticket open, but I thought I'd also ask the community. Since upgrading our Splunk to 8.0.1, this one HF has been spewing "TcpOutputProc - Possible duplication of events" for most channels, as well as "TcpOutputProc - Applying quarantine to ip=xx.xx.xx.xx port=9998 _numberOfFailures=2". We upgraded on the 15th near midnight. This is a count of those errors from that host:

2020-02-14  0
2020-02-15  623
2020-02-16  923874
2020-02-17  396920
2020-02-18  678568
2020-02-19  602100
2020-02-20  459284
2020-02-21  1177642

Here is a count from the indexer cluster showing the number of blocked=true events. One would expect these to be similar in count if the indexers were telling the HF to go elsewhere because their queues were full.

index=_internal host=INDEXERNAMES sourcetype=splunkd source=/opt/splunk/var/log/splunk/metrics.log blocked=true component=Metrics
| timechart span=1d count by source

2020-02-14  7
2020-02-15  180
2020-02-16  260
2020-02-17  15
2020-02-18  18
2020-02-19  2415
2020-02-20  1
2020-02-21  2

Lastly, it's not just one source or channel; it's everything from the host.

index=_internal component=TcpOutputProc host=ghdsplfwd01lps log_level=WARN duplication
| rex field=event_message "channel=source::(?<channel>[^|]+)"
| stats count by channel

/opt/splunk/var/log/introspection/disk_objects.log   51395
/opt/splunk/var/log/introspection/resource_usage.log 45470
mule-prod-analytics                                  42192
/opt/splunk/var/log/splunk/metrics.log               28283
web_ping://PROD_CommerceHub                          27881
web_ping://V8_PROD_CustomSolr5                       27877
web_ping://V8_PROD_WebServer4                        27873
web_ping://EnterWorks PRD                            27871
web_ping://RTP DEV                                   27870
web_ping://Ensighten                                 27869
web_ping://RTP                                       27867
bandwidth                                            20570
cpu                                                  19949
iostat                                               19946
ps                                                   19821

Any ideas?
Hello Splunkers! I added tostring with the "commas" option to a number to get the thousands separator. Works fine. The problem is that when I use the rex command to replace the commas with spaces to match the Canadian number format, my numbers get shifted to the left side. Is there another way to do this so I can keep my numbers on the right side? See code below:

| eval sum_totrows=tostring(sum_totalrows,"commas")
| rex field=sum_totrows mode=sed "s/,/ /g"

Result:
407 930 119
14 131
...

Thanks!
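One thing that may be worth trying (a sketch, not a guaranteed fix): fieldformat applies the formatting at render time while leaving the underlying field numeric, which preserves numeric sorting and, depending on your Splunk version, may preserve table alignment as well:

```
| fieldformat sum_totalrows = replace(tostring(sum_totalrows, "commas"), ",", " ")
```

The replace() eval function does the same comma-to-space substitution as the sed-mode rex, but without materializing a separate string field.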
Hello, I am trying to display at search time only the content of the "log" field, where the application data is. I am using the stanza below on the SH. Cheers.

[source::http:k8s_test]
KV_MODE = json
EVAL-_raw = log

Event:
{
   K8Cluster: k8s-cluster-aa-bb-01
   docker: {
      container_id: 919d689b4ee5aa0ac2ad7ac3333557b4bb7471da313ac9c7e6cbfc9c9e925e8a
   }
   kubernetes: { ... }
   log: [2020/02/28 16:40:41] [error] [out_fw] no upstream connections available
   stream: stderr
}

Desired output:
[2020/02/28 16:30:18] [error] [out_fw] no upstream connections available
[2020/02/28 16:30:18] [error] [out_fw] no upstream connections available
Hi All, I am looking to implement Log Analytics and had a look through (https://www.appdynamics.com/product/how-it-works/application-analytics/log-analytics), but it doesn't seem to fully give me the information I require. Does anybody have any other links or videos that would give a more in-depth insight into Log Analytics? I presume people have had some positive results with this too? Are you able to analyse any type of log file, for example a plain .log file, or is there a restriction on the files you can check?
We have nine sites in a multi-site cluster, with indexers at each site ranging from three to 15 servers. Each site's indexers are all on the same VLAN and IP subnet for their region. I need to expand one of the sites with more indexers, but the VLAN has run out of IP addresses. Is it possible to just create a new VLAN with a different IP subnet range and add these new indexers to the previously configured site? For example, site2's indexers are in VLAN 2 on IP subnet range 10.1.1.0/24. Can I add six new indexers to site2 with those new servers in VLAN 3 on IP subnet range 11.1.1.0/24? I looked over the documentation and didn't see a requirement that indexers in a site of a multi-site cluster all be on the same VLAN/IP subnet range, but I wanted to check with real users in the community whether this is a legit configuration. Any pros and cons? We currently have two independent search heads but are going to a search head cluster later this year. Thank you.
Greetings all. I have this:

| stats dc(Indexer) AS connected_indexers values(Indexer) AS Connected by connectType sourceIp sourceHost Ver

I have a list of indexers (ind1, ind2, ind3); if any of them shows up in values(Indexer), I want to filter that entire line out of my report. How would I do that? Thanks!
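A hedged sketch of one way to do this: mvfind returns null when no value in a multivalue field matches the regex, so the filter can be appended as a where clause (the field names come from the stats above; ind1/ind2/ind3 are the placeholder indexer names from the question):

```
| stats dc(Indexer) AS connected_indexers values(Indexer) AS Connected
    by connectType sourceIp sourceHost Ver
| where isnull(mvfind(Connected, "^(ind1|ind2|ind3)$"))
```

This drops any row whose Connected list contains at least one of the named indexers.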
Splunk seems to have installed on Ubuntu 18.04, but the only place I see it is in the /opt directory. If I try to cd to it, it says no such directory exists, but if I cat it, it says it is a directory. Any help? I am just trying to get to know the product because we use it at work.
Hi All, I am ingesting CloudWatch logs through S3 -> SNS -> SQS on a heavy forwarder, using the AWS add-on with SQS-based S3 as the input type. The logs in the bucket are in .gz format, and when Splunk ingests them and I search for them, the logs are not in a readable format (all special characters, like 3÷/Í(2+½·ÔL3¥ÂïÅ^½}Ô). Could you please guide me on how to solve this? I have ingested other logs in gz format and those can be read.