
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

How can I get cumulative response times for the endpoints below? This is the query I tried, but I need similar endpoints to be aggregated together instead of reported as separate endpoints.

| stats values(pod) as HOST count avg(ReqProcessTime) as Avg p90(ReqProcessTime) as "Percentile90" max(ReqProcessTime) as Max by endpointURI, servicename, ResponseCode
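A possible approach, assuming "similar endpoints" differ only by a variable segment such as a numeric ID in the URI (the endpoint_group name and the /\d+ pattern below are illustrative): normalize the URI into a common key before aggregating.

| eval endpoint_group=replace(endpointURI, "/\d+", "/{id}")
| stats values(pod) as HOST count avg(ReqProcessTime) as Avg p90(ReqProcessTime) as "Percentile90" max(ReqProcessTime) as Max by endpoint_group, servicename, ResponseCode

Here replace() collapses paths like /orders/123 and /orders/456 into /orders/{id}, so their response times are cumulated into a single row.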
Example: Mynameissachintendulkar. Except for "sachin", I need to remove all the remaining text. Please help me with the query. Thanks in advance.
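A minimal sketch, assuming the value lives in a field called myfield and that the literal substring "sachin" is what should be kept (both names are placeholders):

| rex field=myfield "(?<name>sachin)"

rex copies only the matching portion into the new field name and leaves the surrounding text behind; if the substring to keep varies, the capture pattern would need to be generalized.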
What is the difference between these apps? https://splunkbase.splunk.com/apps/#/search/nmon/product/all

1- ITSI module for Nmon Metricator (10 installs)
2- Technical Addon for the Metricator application for Nmon (165 installs)
3- Support Addon for the Metricator application for Nmon (332 installs)
4- Metricator application for Nmon (412 installs)
Hi All, I need help getting the right rex filter for the _raw data below.

2021-12-04T01:29:48.015524+00:00 USHCO-EXXON, ipsec-ike-down, 689, "IKE connection with peer 10.218.42.113 (routing-instance EXXON-Control-VR) is up", USPAB
2021-12-04T01:29:15.007722+00:00 USHCO-EXXON, ipsec-tunnel-down, 687, "IPSEC tunnel with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up", USPAB
2021-12-04T01:29:15.007722+00:00 USHCO-EXXON, ipsec-ike-down, 686, "IKE connection with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up", USPAB
2021-12-04T01:29:14.807814+00:00 USHCO-EXXON, ipsec-tunnel-down, 872, "IPSEC tunnel with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up", USPAB
2021-12-04T01:29:14.807814+00:00 USHCO-EXXON, ipsec-ike-down, 871, "IKE connection with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up", USPAB

Requirement: all the content within the double quotes needs to be extracted into a field called Event_Log. For example:

"IKE connection with peer 10.218.42.113 (routing-instance EXXON-Control-VR) is up"
"IPSEC tunnel with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up"
"IKE connection with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up"
"IPSEC tunnel with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up"
"IKE connection with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up"

This is what I tried:

| rex field=_raw "(?<Event_Log>[^"]+)"

But something is missing; it's not capturing the data.
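A likely fix, assuming the problem is the unescaped double quotes inside the SPL string (this anchors the capture between literal quote characters):

| rex field=_raw "\"(?<Event_Log>[^\"]+)\""

In the original attempt the inner " characters terminate the rex pattern string itself; escaping them as \" is what keeps them inside the pattern.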
Is it possible to hide a panel if the search returns 0 results? I have a search which normally returns a table, but with an empty search result there is a gray table-like symbol. Is there any way to hide this, or show something else instead, such as some text?
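One common pattern in Simple XML, sketched here with a placeholder query and token name: set a token from the finished search's result count and make the panel depend on it.

<panel depends="$show_panel$">
  <table>
    <search>
      <query>index=my_index | stats count by host</query>
      <done>
        <condition match="'job.resultCount' &gt; 0">
          <set token="show_panel">true</set>
        </condition>
        <condition>
          <unset token="show_panel"></unset>
        </condition>
      </done>
    </search>
  </table>
</panel>

When the token is unset the panel disappears entirely; a second <html> panel depending on an inverse token could show replacement text instead.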
Hello everyone, I've run into a problem with a variable in email templates. When trigger text with ${action.triggerTime} is sent to an email address, I see only UTC time, but for my notifications I need GMT+3. I've tried everything, but without success. Maybe someone knows how to use this variable with a non-UTC time?
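One possible workaround if this is a core Splunk alert (a sketch only; the trigger_time_local field name is made up): compute a shifted, human-readable timestamp inside the alert's search and reference it in the email body as $result.trigger_time_local$ instead of the built-in token.

| eval trigger_time_local=strftime(now() + 3*3600, "%Y-%m-%d %H:%M:%S")

This adds a fixed three hours to the epoch time before formatting, which matches GMT+3 but does not follow daylight-saving changes.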
I need to show a bar graph of failed login counts from different IPs over time. The user wants me to show the columns in red where the login count is >= 6, and in green where the login count is < 6. How can I achieve this? Kindly help.
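One common workaround, sketched with made-up index and field names: conditional per-column coloring isn't a built-in chart option, but splitting the count into two series lets each series carry a fixed color.

index=my_index action=failure
| timechart span=1h count as logins
| eval red=if(logins>=6, logins, null()), green=if(logins<6, logins, null())
| fields - logins

The two series then get fixed colors in the dashboard XML:

<option name="charting.fieldColors">{"red": 0xFF0000, "green": 0x008000}</option>

Note this colors the total per time bucket; coloring each IP's own series conditionally is not something the built-in column chart supports.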
The query below gives the desired result of 3 hosts:

index=* | table host | stats count by host

For the first few seconds it shows the correct options in the dashboard filter where the same query has been implemented. Then, a few seconds after the whole dashboard has loaded, it shows extra options, as marked in the snip. Please help me resolve this.
Hi everyone, I have 3 indexers with this specification:

1.7 TB local SSD for hot & warm buckets
1.7 TB local SSD for data models
27 TB SAN storage for cold buckets
replication factor: 3
search factor: 2

Now I want to add 2 indexers which have local SSD just like the other servers, but for now I can't provide SAN storage for cold buckets on them. As I mentioned, because my RF is 3, my cold buckets are spread exactly as my settings dictate. My question is: is there a problem with adding my new indexers now and adding SAN storage to them after a while?
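For reference, a minimal sketch of the indexes.conf settings involved (the index name and paths are placeholders): on the new indexers, coldPath could point at local storage temporarily and be repointed once the SAN volume exists, keeping in mind that relocating cold buckets later means stopping the indexer and copying the bucket directories to the new path.

[my_index]
homePath = /ssd/splunk/my_index/db
# Point coldPath at local disk for now; repoint to the SAN volume later
coldPath = /ssd/splunk/my_index/colddb
thawedPath = /ssd/splunk/my_index/thaweddb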
Hi, I'm trying to forward data into my Splunk indexer, but when I do a "./splunk list forward-server", it shows up under "Configured but inactive forwards". When I checked "splunkd.log", I see:

AutoLoadBalancedConnectionStrategy [1676 TcpOutEloop] - Cooked connection to ip=<indexer_ip>:9997 timed out  <-- 5 times, every 20 seconds
TcpOutputProc [1675 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=<indexer_ip> inside output group default-autolb-group from host_src=<forwarder hostname> has been blocked for blocked_seconds=8100. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

My forwarder is version 8.2.3 (on Ubuntu 20.04); my indexer is version 7.1.4 (on RHEL 7.4). I'm using a non-root account on my forwarder, and every time I run splunk or access the monitored folders, I need to run the commands using "sudo".

I have tried to telnet from the forwarder to the indexer. On port 8089, I immediately get a "connection refused" message. On port 9997, there is no immediate response; the prompt just seems to wait for a long time at "Trying <indexer IP>...". I don't expect the telnet connection to be successful, as my forwarder is behind a firewall that is very strict about which ports are allowed, but port 9997 (as destination) should be allowed on the network-level firewall.

I also tried to check my "ufw", but I don't think it's running:

> sudo ufw status
Status: inactive
> sudo systemctl status ufw.service
... Active: active (exited) since Monday 2021-12-06; 2h 59min ago ...
> sudo ufw show listening
tcp: 22 <forwarder IP> 8089 *
udp: ...

On the indexer, I have "/opt/splunk/etc/system/local/inputs.conf":

[default]
host = <indexer hostname>

[splunktcp://9997]
disabled = 0

What is wrong with my setup?
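For completeness, a minimal outputs.conf on the forwarder side would look something like this (a sketch mirroring the default-autolb-group named in the log; the IP is a placeholder):

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <indexer_ip>:9997

The log messages above show the TCP connection itself timing out, so the stanza contents matter less than whether <indexer_ip>:9997 is actually reachable from the forwarder.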
A number of sourcetypes are coming up as status=red because their data_last_time_seen field is "stuck". All of these are coming from the Microsoft Teams Add-on for Splunk. New data is coming in, the Overview data source tab is recognising it, and the new events can also be seen using search. There does appear to be a change in the data format that may be responsible for data_last_time_seen not being able to update; however, clearing and running data sampling again had no effect. Refreshing also has no effect. Is there a way to "refresh" this field? Or any other approaches that can be taken? Thanks
How can I get the "host" value extracted from a JSON event with "INDEXED_EXTRACTIONS = json" into the event's host field? By default, this value ends up in the extracted_host field, and the following INGEST_EVAL does not work:

INGEST_EVAL = host:=extracted_host, extracted_host:=null()
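An alternative sketch using a classic host-override transform (the stanza names are made up, and with INDEXED_EXTRACTIONS the parsing happens on the forwarder, so these settings need to live wherever the events are parsed):

props.conf:

[my_json_sourcetype]
TRANSFORMS-set_host = set_host_from_json

transforms.conf:

[set_host_from_json]
SOURCE_KEY = _raw
REGEX = "host"\s*:\s*"([^"]+)"
DEST_KEY = MetaData:Host
FORMAT = host::$1

This pulls the value straight out of the raw JSON with a regex instead of relying on the already-indexed extracted_host field.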
Hello, when I search for sourcetype=Xxx for the last 60 min window, I find millions of records, so I tried to export those millions of records in CSV format. When I exported and tried to download, within 30 seconds the page showed me a blank screen like this. Any information please? What might be the issue? Please suggest.
Hello everyone, when I was trying to search sourcetype=Xxx and checked the date range from 03/09/2021 to 06/07/2021, it showed me millions of records. Then I searched from 1 May 2021 to 25 July 2021 and it showed only 370 events, so the results are not as expected; there should be more. Please could you help me regarding this issue. Thanks
Hello Splunk Community, I have created a query to calculate the business date of each file which arrived to be loaded and the date/time it arrived, which outputs the following (dummy data used):

File    Business Date  Arrival Date  Arrival Time
File A  22-11-2021     06-12-2021    6.51
File B  22-11-2021     06-12-2021    6.55
File B  22-11-2021     06-12-2021    6.56

I want to create a new column which highlights if a file (with the same business date) arrived more than once on the same day. So, for example, the output would look like this:

File    Business Date  Arrival Date  Arrival Time  Count
File A  22-11-2021     06-12-2021    6.51          1
File B  22-11-2021     06-12-2021    6.55          2
File B  22-11-2021     06-12-2021    6.56          2

Can anyone help improve my query to include this new column? Thanks, Zoe
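A minimal sketch of the missing step, assuming the columns are fields named File, BusinessDate and ArrivalDate (adjust to the real field names): eventstats counts rows per group without collapsing them.

... | eventstats count as Count by File, BusinessDate, ArrivalDate
| table File, BusinessDate, ArrivalDate, ArrivalTime, Count

Unlike stats, eventstats appends the per-group count to every row, so File B keeps both of its arrival times, each with Count=2.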
I'm trying to backfill my summary index with 2 months' worth of data with a report that gives results from the last minute. This is my report:

action.email.useNSSubject = 1
action.summary_index = 1
action.summary_index._type = event
alert.track = 0
cron_schedule = */1 * * * *
dispatch.earliest_time = -1m
dispatch.latest_time = now
display.events.fields = ["host","source","sourcetype","Price","ID","Date","Time"]
display.general.type = statistics
display.page.search.tab = statistics
display.visualizations.show = 0
enableSched = 1
realtime_schedule = 0
request.ui_dispatch_app = myapp
request.ui_dispatch_view = search
schedule_priority = higher
search = index="myindex" sourcetype="mysource" | append [ search index="myindex" sourcetype="mysource" earliest=-1mon@mon latest=@mon | stats avg(Price) as past_avg by ID ] | eventstats values(past_avg) as past_avg by ID | where Price > past_avg | stats values(*) as * by ID | table ID, Price, past_avg

I tried to fill it using this command:

splunk cmd python fill_summary_index.py -app Myapp -name "Summary_Population" -et -2mon@mon -lt @mon -dedup true

but I get this error:

*** For saved search 'Summary_Populating' ***
Failed to get list of scheduled times for saved search 'Summary_Populating' (app = 'Myapp', error = '[HTTP 404] https://127.0.0.1:8089/servicesNS/Myusername/Myapp/saved/searches/Summary_Populating/scheduled_times?earliest_time=-mon%40mon&latest_time=%40now; [{'type': 'ERROR', 'code': None, 'text': 'Action forbidden.'}]'
No searches to run

Does anyone have any idea why this is occurring and how to fix it?
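A variant worth trying, based on the script's documented options (the credentials below are placeholders): the 'Action forbidden' inside the 404 suggests the script is addressing the saved search under the wrong user namespace, and fill_summary_index.py accepts -owner and -auth flags to control that.

splunk cmd python fill_summary_index.py -app Myapp -name "Summary_Population" -et -2mon@mon -lt @mon -dedup true -owner admin -auth admin:changeme

Also note the error mentions 'Summary_Populating' while the command passes 'Summary_Population'; the -name value must match the saved search's name exactly.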
Hi, I've been reading a number of posts about how to extract the OS and browser details, but I don't think there is a better or cleaner way to do this. I have a similar requirement where my logs contain a user agent field. What I want is to know the browser details along with the device type, i.e. whether it's a desktop, mobile, etc. I'm just posting this to see if anyone has figured out anything that can save time writing complex SPL. Any help will be appreciated.
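For what it's worth, a rough SPL sketch (assuming the field is called http_user_agent; the patterns are illustrative and far from exhaustive):

| rex field=http_user_agent "\((?<ua_platform>[^)]+)\)"
| eval device=case(match(http_user_agent, "Mobile|iPhone|Android.*Mobile"), "mobile", match(http_user_agent, "iPad|Tablet"), "tablet", true(), "desktop")

The parenthesized platform segment plus a few keyword checks cover the common cases, but a complete solution really needs a maintained user-agent lookup rather than hand-written regexes, which is presumably why the complex SPL feels unsatisfying.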
Hi Team, the issue is that we have duplicates in the summary index, where multiple identical records are present from a single host. Looking for very quick help: I need a query to remove duplicates from the summary index. Thanks in advance. Regards, Prakash Mohan Doss
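A caveat-heavy sketch (index, source, and field names are placeholders): Splunk's delete command removes every event a search returns, so it cannot keep one copy of byte-identical duplicates. The usual options are to dedup at search time, or to delete the affected span and re-run the backfill.

Dedup at search time:

index=my_summary source="my_saved_search" | dedup _time, host, <key_fields>

Delete and refill (requires the can_delete role, and deleted events are only masked from search, not reclaimed from disk):

index=my_summary source="my_saved_search" earliest=<start> latest=<end> | delete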
We are using Splunk 7.2.6 as our syslog server in our network environment. On the Splunk server I added the IPFIX add-on, and on the NSX-T side I pointed the IPFIX target to SplunkSrv:4739. We used the exact same configuration on our previous NSX-V and were able to receive all the related packets in Splunk, but with NSX-T we are unable to do so. We checked everything, local firewall rules on Splunk included, but still no luck. I also checked the Splunk server with Wireshark and I do see IPFIX traffic, but it doesn't show up in Splunk. Any thoughts?
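For reference, a generic UDP input stanza for that port (a sketch only; the IPFIX add-on may ship its own input configuration, and the sourcetype here is a placeholder), useful for confirming that Splunk is actually listening on 4739:

[udp://4739]
sourcetype = ipfix
connection_host = ip

If raw events appear with a stanza like this but the add-on's own input produces nothing, the difference likely lies in how NSX-T formats its IPFIX templates compared with NSX-V.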
At the beginning I want to say that I did search the forums, and I saw the most typical responses like "use logrotate". Sorry, that's not applicable in this case.

The case is: I have a Windows-based UF which is supposed to ingest files from several servers shared over CIFS shares. During typical operation it seems to work quite well since we increased the limits on the UF, especially those pertaining to the number of open file descriptors. But we have issues whenever we have to restart the UF (which mostly happens when we add new sources). Then the files get re-checked one after another, and even though we have limits like "ignore_older_than" and so on, and the CRCs are salted with the filename, so the events aren't effectively re-ingested multiple times, the UF still opens each file one after another and checks its contents. This takes up to a few hours after each UF restart, which is kind of annoying. Any hints at optimizing that?

Unfortunately, we're quite limited on the source side, since we're not able to either install UFs on the log-producing machines (which of course would at least to some extent alleviate the problem) or move the logs away from the source location. So effectively we're stuck with this setup.

Might there be some issue with Windows sharing such that, even though the fishbucket seems to be working properly, the UF still has to scan through the whole file from the beginning? That would explain the delay.
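For reference, the inputs.conf knobs that usually matter in this situation (the path and values are illustrative, not recommendations):

[monitor://\\fileserver\share\logs]
ignoreOlderThan = 7d
crcSalt = <SOURCE>
initCrcLength = 1024

Raising initCrcLength (the default is 256 bytes) makes the initial CRC cover more of each file, which lets the fishbucket match files unambiguously without reading further into them, and ignoreOlderThan skips files whose modification time falls outside the window before they are even opened.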