All Topics

Hello all, I need a search query where I can see all the index logs via "| stats count by date, index". I tried the search below, but it didn't help:

index=* source=*license_usage.log type="Usage" splunk_server=* earliest=-2month@d
| eval Date=strftime(_time, "%Y/%m/%d")
| eventstats sum(b) as volume by idx, Date
| eval MB=round(volume/1024/1024,5)
| timechart first(MB) AS volume by idx
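One way to get daily volume per index from license_usage.log is to sum the b field with stats rather than eventstats (eventstats keeps every duplicate row, which then confuses the timechart). A minimal sketch, assuming the license data lives in _internal and uses the default idx/b field names:

```spl
index=_internal source=*license_usage.log type="Usage" earliest=-2month@d
| eval Date=strftime(_time, "%Y/%m/%d")
| stats sum(b) as bytes by Date, idx
| eval MB=round(bytes/1024/1024, 2)
| sort Date, idx
```

This produces one row per index per day, which can then be charted or tabled directly.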
Hi, we need to configure the TA-elasticsearch-data-integrator---modular-input app to receive data. The problem is: we do receive data, but not all of it. Here is the app config:

Name: ALogName
Interval: 3600
Index: MyIndex
Status: Activated
Elasticsearch instance URL: MyName
Port #: MyPort
Use SSL: 1
Verify Certs: 1
CA Certs Path: /my/ca.pem
User: MyUser
Secret / Password: MyPassword
Elasticsearch Indice: MyIndice
Elasticsearch Date field name: @timestamp
Time Preset: 30d
Custom Source Type: json

If I use the CLI with the exact same configuration, except that I use a match query, I receive the correct data:

curl -u "MyUser:MyPassword" -k "https://MyName:MyPort/MyIndice/_search?&scroll=1m&size=1000" -H 'Content-Type: application/json' -d'{"query": {"match": {"message": "MyMessage"}}, "sort": { "@timestamp": "desc" }}'
{"_scroll_id":"[...]","took":695,"timed_out":false,"_shards":{"total":8,"successful":8,"skipped":0,"failed":0},"hits":{"total":{"value":3,"relation":"eq"},"max_score":null,"hits":[...MyData...]

Here are the logs of the app:

2021-12-06 13:29:00,073 INFO pid=26584 tid=MainThread file=base.py:log_request_success:271 | POST https://MyName:MyPort/MyIndice/_search?scroll=2m&size=1000 [status:200 request:0.870s]
2021-12-06 13:37:12,701 WARNING pid=26584 tid=MainThread file=base.py:log_request_fail:299 | POST https://MyName:MyPort/_search/scroll [status:404 request:0.076s]
2021-12-06 13:37:12,703 INFO pid=26584 tid=MainThread file=base.py:log_request_success:271 | DELETE https://MyName:MyPort/_search/scroll [status:404 request:0.002s]
2021-12-06 13:37:12,705 ERROR pid=26584 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/elasticsearch_json.py", line 104, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 109, in collect_events
    for doc in res:
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/helpers/actions.py", line 589, in scan
    body={"scroll_id": scroll_id, "scroll": scroll}, **scroll_kwargs
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/client/utils.py", line 168, in _wrapped
    return func(*args, params=params, headers=headers, **kwargs)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/client/__init__.py", line 1513, in scroll
    "POST", "/_search/scroll", params=params, headers=headers, body=body
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/transport.py", line 415, in perform_request
    raise e
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/transport.py", line 388, in perform_request
    timeout=timeout,
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/connection/http_urllib3.py", line 275, in perform_request
    self._raise_error(response.status, raw_data)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/connection/base.py", line 331, in _raise_error
    status_code, error_message, additional_info
elasticsearch.exceptions.NotFoundError: NotFoundError(404, 'search_phase_execution_exception', 'No search context found for id [9884105]')

Any help would be great, thanks!
We're trying to configure an HTTPS API feed that will push logs from the Zscaler Cloud service into an HTTPS API-based log collector on the SIEM, using a trial Splunk Cloud platform. Please advise.

Thanks, Asif
Hi, we would like to fetch application logs from a Windows server which are stored in the Windows Application event log.

Windows Application Event Log Name: MSSQL_xxx
Windows Event Log Source: XYX

I found: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/MonitorWindowseventlogdata#Specify_global_settings_for_Windows_Event_Log_inputs

But is there a possibility to filter in the input stanza like this?

[WinEventLog://Application/MSSQL_xxx]
include = XYX
renderXml = 1
sourcetype = XmlWinEventLog

Thanks.
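The WinEventLog stanza path names a channel, not a source, but Windows event log inputs do support key=regex whitelists. A sketch under the assumption that XYX is the events' provider name (the SourceName key); the exact key names and regex delimiters should be checked against the docs page linked above:

```conf
[WinEventLog://Application]
whitelist = SourceName=%^XYX$%
renderXml = 1
sourcetype = XmlWinEventLog
```

With this, only Application-channel events whose source matches the regex would be forwarded, rather than the whole channel.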
Hi team, I want to monitor my Unix server's CPU usage. If the CPU usage exceeds 90%, I need to send an alert email. Can you please help me with this?
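If the Splunk Add-on for Unix and Linux (or similar) is already collecting CPU data, the alert search could look like the sketch below; the index, sourcetype, and pctIdle field name are assumptions that depend on your inputs:

```spl
index=os sourcetype=cpu
| eval cpu_used = 100 - pctIdle
| stats latest(cpu_used) as cpu_used by host
| where cpu_used > 90
```

Saved as an alert on a 5-minute schedule, triggered when the number of results is greater than 0, with an email alert action, this would notify on any host above the threshold.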
Hi, I am looking for a way to track when a new Splunk Forwarder connects along with the version. Was hoping to find some relevant field on Deployment Server (/services/deployment/server/clients) but I could only see lastPhoneHomeTime, nothing for when it first connected to the system. Is it possible to get this information, either from Deployment Server or Forwarder's internal logs?   Thanks, ~ Abhi  
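Indexers log incoming forwarder connections in their own metrics.log under group=tcpin_connections, which includes the forwarder hostname and version, so earliest(_time) approximates "first connected" within _internal retention. A sketch:

```spl
index=_internal sourcetype=splunkd source=*metrics.log group=tcpin_connections
| stats earliest(_time) as first_seen, latest(_time) as last_seen, latest(version) as version by hostname
| convert ctime(first_seen), ctime(last_seen)
```

The caveat is that a forwarder older than your _internal retention window will look like it "first connected" at the start of that window.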
Hi all, someone in our environment made a field extraction and used it in a dashboard. Other users want to see these fields too, so as the admin I made the specific field extraction's permissions global. However, the other users still don't see it. When I search the backend, I see that the transform is saved in the user folder instead of the app folder. Why does this happen, and what is the solution to make the extraction available to everyone, besides just copying it to an app folder?
We have a requirement to set up ping and nslookup for hosts in different network zones and index the data into Splunk. We are planning to use the ping and nslookup search commands that come with the Network Toolkit TA, along with the Splunk map command, as below:

| inputlookup hostlist.csv
| fields host
| map search="| ping dest=\"$host$\" index=index name" maxsearches=50000

We would set up a similar one for nslookup as well, instead of ping.

This will be set up as a scheduled saved search running every 5 minutes. There can be up to around 50,000 hosts. Do you think we can use map at this scale? If not, what is the optimal number of rows (with the search running every 5 minutes) to use with the map command for this to work efficiently? Are there any better options to set this up in Splunk?
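Note that map runs its subsearches serially, so 50,000 pings every 5 minutes is very unlikely to finish in the window; the relevant parameter is maxsearches (default 10). A small-batch sketch for testing the plumbing before deciding on scale, keeping the target host on each result row:

```spl
| inputlookup hostlist.csv
| fields host
| head 100
| map maxsearches=100 search="| ping dest=\"$host$\" | eval host=\"$host$\""
```

For tens of thousands of hosts, a scripted or modular input that pings in parallel and writes results to an index is usually a better fit than map.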
I'm pulling events from remote computers using WMI as described in the Splunk docs. Everything seems to be going quite well, except that sometimes I encounter something like this in my logs:

Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Mon Dec 6 12:22:22 2021). Context: source=WMI:WinEventLog:Application|host=<redacted>|WMI:WinEventLog:Application|1

This is quite surprising, since I thought WMI-pulled events should have a proper timestamp created from the event timestamp on the source machine. Has anyone encountered such an issue?
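One workaround, if only a specific source misparses, is to pin timestamp recognition for it in props.conf on the parsing tier. This is only a sketch: the TIME_FORMAT below assumes a ctime-style header like the one quoted in the warning and must be adjusted to what your events actually start with:

```conf
[source::WMI:WinEventLog:Application]
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 32
```

Alternatively, DATETIME_CONFIG = CURRENT stamps events with index time, which trades accuracy for never falling back to the previous event's timestamp.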
How do I get cumulative response times for the endpoints below? This is the query I tried, but similar endpoints should be aggregated together instead of being reported as separate endpoints:

| stats values(pod) as HOST count avg(ReqProcessTime) as Avg p90(ReqProcessTime) as "Percentile90" max(ReqProcessTime) as Max by endpointURI, servicename, ResponseCode
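If "similar endpoints" differ only in embedded identifiers, normalizing endpointURI before the stats collapses them into one group. The sed expression below assumes the variable parts are numeric path segments; adjust the pattern to whatever actually varies in your URIs:

```spl
| rex mode=sed field=endpointURI "s/\/[0-9]+/\/{id}/g"
| stats values(pod) as HOST count avg(ReqProcessTime) as Avg p90(ReqProcessTime) as "Percentile90" max(ReqProcessTime) as Max by endpointURI, servicename, ResponseCode
```

After the rewrite, /orders/123 and /orders/456 both become /orders/{id} and are aggregated together.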
Example: Mynameissachintendulkar. I need to remove all of the text except "sachin". Please help me with the query. Thanks in advance.
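A literal rex extraction of "sachin" from the string would look like the sketch below; field_name is a placeholder for whatever field holds the text:

```spl
| rex field=field_name "(?<name>sachin)"
| table name
```

An eval alternative is `| eval name=replace(field_name, ".*(sachin).*", "\1")`, which overwrites the surrounding text in place.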
What is the difference between these apps?
https://splunkbase.splunk.com/apps/#/search/nmon/product/all

1. ITSI module for Nmon Metricator (10 installs)
2. Technical Addon for the Metricator application for Nmon (165 installs)
3. Support Addon for the Metricator application for Nmon (332 installs)
4. Metricator application for Nmon (412 installs)
Hi all, I need help in getting the right rex filter for the _raw data below.

2021-12-04T01:29:48.015524+00:00 USHCO-EXXON, ipsec-ike-down, 689, "IKE connection with peer 10.218.42.113 (routing-instance EXXON-Control-VR) is up", USPAB
2021-12-04T01:29:15.007722+00:00 USHCO-EXXON, ipsec-tunnel-down, 687, "IPSEC tunnel with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up", USPAB
2021-12-04T01:29:15.007722+00:00 USHCO-EXXON, ipsec-ike-down, 686, "IKE connection with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up", USPAB
2021-12-04T01:29:14.807814+00:00 USHCO-EXXON, ipsec-tunnel-down, 872, "IPSEC tunnel with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up", USPAB
2021-12-04T01:29:14.807814+00:00 USHCO-EXXON, ipsec-ike-down, 871, "IKE connection with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up", USPAB

Requirement: all of the content within double quotes needs to be extracted. For example:

"IKE connection with peer 10.218.42.113 (routing-instance EXXON-Control-VR) is up"
"IPSEC tunnel with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up"
"IKE connection with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up"
"IPSEC tunnel with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up"
"IKE connection with peer 10.218.42.111 (routing-instance EXXON-Control-VR) is up"

The content above should be extracted into Event_Log. I tried:

| rex field=_raw "(?<Event_Log>[^"]+)"

But something is missing; it's not capturing the data.
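The likely culprit is that the quote characters inside the rex string need escaping, so the character class is actually read as [^"] instead of ending the string early. A sketch, with max_match=0 so every quoted segment in an event is captured:

```spl
| rex field=_raw max_match=0 "\"(?<Event_Log>[^\"]+)\""
| table Event_Log
```

If the rex is used inside a Simple XML dashboard, the quotes may additionally need XML entity escaping.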
Is it possible to hide a panel if the search returns no results? I have a search which normally returns a table, but with an empty search result there is a gray table-like symbol. Is there any way to hide this, or to show something else instead, such as some text?
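In Simple XML this can be done with a depends token that the search's done handler sets only when there are results; the token name and query below are illustrative placeholders:

```xml
<panel depends="$show_results$">
  <table>
    <search>
      <query>index=main | stats count by host</query>
      <done>
        <condition match="'job.resultCount' &gt; 0">
          <set token="show_results">true</set>
        </condition>
        <condition>
          <unset token="show_results"></unset>
        </condition>
      </done>
    </search>
  </table>
</panel>
```

To show placeholder text instead of nothing, set a second token in the empty-result branch and attach it via depends to an html panel containing the message.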
Hello everyone, I am facing a problem with a variable in email templates. When trigger text with ${action.triggerTime} is sent to an email address, I see only UTC time, but my notifications need GMT+3. I have tried everything without success. Maybe someone knows how to use this variable with non-UTC time?
I need to show a bar graph of failed-login counts from different IPs over time. The user wants me to show the columns in red where the login count is >= 6, and in green where the login count is < 6. How can I achieve this? Kindly help.
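One common approach is to split the count into two series with eval and assign each series a fixed color; built-in charting options cannot conditionally color individual columns within one series. The index and field names below are assumptions, and this sketch drops the per-IP split:

```spl
index=auth action=failure
| timechart span=1h count as logins
| eval high=if(logins>=6, logins, null()), low=if(logins<6, logins, null())
| fields _time, high, low
```

In the panel's XML, the colors are then fixed with `<option name="charting.fieldColors">{"high": 0xFF0000, "low": 0x00FF00}</option>`.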
The query is giving the desired result of 3 hosts:

index=* | table host | stats count by host

For the first few seconds it shows the correct options in the dashboard filter where this query has been implemented. Then, a few seconds after the whole dashboard loads, it shows extra options, as marked in the snip. Please help me resolve this.
Hi everyone, I have 3 indexers with this specification:

1.7 TB local SSD for hot and warm buckets
1.7 TB local SSD for data models
27 TB SAN storage for cold buckets
Replication factor: 3
Search factor: 2

Now I want to add 2 indexers which have local SSD just like the other servers, but for now I can't provide SAN storage for their cold buckets. As mentioned, because my RF is 3, my cold buckets are spread exactly per my settings. My question is: is there a problem with adding my new indexers now and adding SAN storage to them later?
Hi, I'm trying to forward data into my Splunk indexer, but when I do a "./splunk list forward-server", it shows up under "Configured but inactive forwards". When I checked "splunkd.log", I see:

AutoLoadBalancedConnectionStrategy [1676 TcpOutEloop] - Cooked connection to ip=<indexer_ip>:9997 timed out <-- 5 times, every 20 seconds
TcpOutputProc [1675 parsing] - The TCP output processer has paused the data flow. Forwarding to host_dest=<indexer_ip> inside output group default-autolb-group from host_src=<forwarder hostname> has been blocked for blocked_seconds=8100. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

My forwarder is version 8.2.3 (on Ubuntu 20.04), my indexer is version 7.1.4 (on RHEL 7.4). I'm using a non-root account on my forwarder, and every time I run splunk or access the monitored folders, I need to run the commands using "sudo".

I have tried to telnet from the forwarder to the indexer. On port 8089, I immediately get a "connection refused" message. On port 9997, there is no immediate response; the prompt just seems to wait for a long time at "Trying <indexer IP>...". I don't expect the telnet connection to be successful, as my forwarder is behind a firewall that is very strict about which ports are allowed. But port 9997 (as destination) should be allowed on the network-level firewall.

I also tried to check my "ufw", but I don't think it's running:

> sudo ufw status
Status: inactive
> sudo systemctl status ufw.service
... Active: active (exited) since Monday 2021-12-06; 2h 59min ago ...
> sudo ufw show listening
tcp: 22 <forwarder IP> 8089 *
udp: ...

On the indexer, I have "/opt/splunk/etc/system/local/inputs.conf":

[default]
host = <indexer hostname>

[splunktcp://9997]
disabled = 0

What is wrong with my setup?
A number of sourcetypes are coming up as status=red because their data_last_time_seen field is "stuck". All of these are coming from the Microsoft Teams Add-on for Splunk. New data is coming in; the Overview data source tab recognises it, and the new events can also be seen using search. There does appear to be a change in the data format that may be responsible for data_last_time_seen not updating; however, clearing and re-running data sampling had no effect, and refreshing also has no effect. Is there a way to "refresh" this field, or any other approaches that can be taken? Thanks