All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I have the following error on my cluster master (XXXXA13) web GUI:

Search peer XXXXP13 has the following message: The TCP output processor has paused the data flow. Forwarding to host_dest= inside output group group1 from host_src=XXXXP13 has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

On the deployment server XXXXP13, the web GUI shows the same message (with host_src=XXXXXP13).

If I then look at splunkd.log on the deployment server, I see the following errors:

IndexerDiscoveryHeartbeatThread - Error in Indexer Discovery communication. Verify that the pass4SymmKey set under [indexer_discovery:group1] in 'outputs.conf' matches the same setting under [indexer_discovery] in 'server.conf' on the Cluster Master. [uri=https://XXXXXA13:8089/services/indexer_discovery http_code=502 http_response="Unauthorized"]

WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to host_dest= inside output group group1 from host_src=XXXXXP13 has been blocked for blocked_seconds=158470. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

This only started after upgrading to 8.4.1. Any help is greatly appreciated.
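As a starting point for checking the pass4SymmKey pairing that the error message names, the two stanzas it refers to look roughly like this (a sketch; the hostname is taken from the post, the secret placeholder is hypothetical — the value just has to be identical on both sides):

```
# outputs.conf on the forwarder / deployment server
[indexer_discovery:group1]
pass4SymmKey = <same_secret_on_both_sides>
master_uri = https://XXXXXA13:8089

# server.conf on the cluster master
[indexer_discovery]
pass4SymmKey = <same_secret_on_both_sides>
```

An http_response of "Unauthorized" on the indexer_discovery endpoint is consistent with these two keys no longer matching, e.g. if one side was re-hashed or reset during the upgrade.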
Hi! I'm new to Splunk; I'm learning it now because I need to understand a Splunk search string given to me by the client (I'm an auditor). I have a few questions about this search string and hope you can help:

index=* sourcetype=WinEventLog:Application OR source=WinEventLog:Application SourceName="Application Name"

1. What does "index=*" mean? I also understand that the sourcetype determines how Splunk reads the data (is that right?), and I was wondering why there is no "OR" before SourceName.
2. If I fetch data from the application's event logging, do I also get the "Message" of that application?
3. How do I know which host the data is from?

I did my research but I can't fully grasp the concepts just yet. Thank you!
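For readers puzzling over the same kind of string: index=* matches events from every index, and terms written next to each other are implicitly ANDed. Since SPL evaluates OR clauses before the implied AND, the search above should be equivalent to the explicitly parenthesized form (same values as in the post):

```
index=* (sourcetype=WinEventLog:Application OR source=WinEventLog:Application) SourceName="Application Name"
```

That is also why no OR is needed before SourceName — it is simply an additional AND condition that every returned event must satisfy.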
Hello, in my indexer I have old data sitting in hot buckets. Can anyone help me? I don't want this old data in hot buckets.
Hi, I have a question. Currently my deployment server and my heavy forwarder (which hosts the HTTP Event Collector) are not secured: HTTPS is not enabled on either server. I want to change both to use HTTPS instead of HTTP, and I am wondering what the impact would be:

1. Will enabling SSL on the deployment server affect the clients that are currently phoning home?
2. On the HF, HEC currently uses plain HTTP. If I enable SSL and switch to HTTPS, will ingestion through the existing HEC tokens, which is done over HTTP today, fail?

Thanks
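For reference, the two settings usually involved are enableSplunkWebSSL for Splunk Web and enableSSL on the HEC input. A minimal sketch, assuming default file locations and that you accept the default server certificates:

```
# web.conf - serve Splunk Web over HTTPS
[settings]
enableSplunkWebSSL = true

# inputs.conf on the heavy forwarder - serve HEC over HTTPS
[http]
enableSSL = 1
```

Note that deployment clients phone home over the management port (8089), which uses TLS by default, so it is worth confirming which channel you are actually changing. Clients that currently send HEC events to http://...:8088 would need their URLs changed to https:// once enableSSL is on; the tokens themselves remain valid.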
Hi Team, I want to monitor the individual CPU and RAM usage of the IIS worker processes that I see when I run:

C:\Windows\System32\inetsrv>appcmd list wps

Please let me know how to onboard the CPU and RAM metrics of these worker processes.

Regards, Vedhajanani
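One common approach is a Perfmon input on the Process object via the Splunk Add-on for Windows. This is a sketch, assuming the add-on is installed on the IIS host; the stanza name, index, and the w3wp* instance wildcard are illustrative (IIS worker processes appear as w3wp, w3wp#1, and so on):

```
[perfmon://IIS_WorkerProcesses]
object = Process
counters = % Processor Time; Working Set - Private
instances = w3wp*
interval = 60
index = perfmon
```

Each sample then carries the instance name, so you can split CPU and memory per worker process in your searches.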
Hello, I'm new to Splunk and was wondering whether the values on the y-axis can be non-numeric. I'm trying to create a chart that maps classroom grades, so the y-axis needs to show A, B, C, D, F. I was able to translate each grade to a number and then chart it, but I was curious whether there is an alternative.
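For the record, the usual workaround is exactly the numeric mapping described. A sketch, assuming a field named grade and a GPA-style scale (both hypothetical):

```
... | eval grade_num=case(grade="A",4, grade="B",3, grade="C",2, grade="D",1, grade="F",0)
    | chart avg(grade_num) by student
```

The built-in chart visualizations need a numeric y-axis; relabeling the numeric ticks back to letters is a visualization-formatting concern rather than something SPL itself does.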
I have a curl response that is a JSON string[]. I am able to fetch the data using split(), mvexpand, and then substr, but the problem with substr is that if the order of values for a key changes, the result is wrong. I tried mvindex on the manipulated data and rex, but neither works; I'm not sure what I am doing wrong. Below is my log. I am interested in "Response=[...]", from which I want to pull the userId values, i.e. 9401850890, 9801850840, 9801850841.

APIName=TestApi, HTTP_STATUS=200, totalTime=2346, Response=[{"id":11168715,"state":"Open","title":"TESTS NOTIFICATION - SPIKE IN USEAGE userId : 9401850890"},{"id":11168716,"state":"Closed","title":"TESTS NOTIFICATION - SPIKE IN USEAGE userId : 9801850840"},{"id":21172617,"state":"Verify","title":"TESTS NOTIFICATION - SPIKE IN USEAGE userId : 9801850841"}]

Query that I tried:

index=api_stats source=apistats earliest=-10m@m latest=-0m@m
| eval x=ltrim(Response,"[")
| eval n=rtrim(x, "]")
| eval temp=split(n,"}")
| mvexpand temp
| eval y=ltrim(temp,",")
| eval testData=mvindex(y,-1)
| eval testId=substr(testData, 7, 9)
| eval apiCallerID=substr(testData, 92, 10)
| table testData,testId,apiCallerID
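One alternative that avoids position-sensitive substr entirely is extracting every userId with a multivalue rex. A sketch against the sample event above (it assumes Response is already an extracted field; if not, run the rex against _raw):

```
index=api_stats source=apistats earliest=-10m@m latest=-0m@m
| rex field=Response max_match=0 "userId : (?<userId>\d+)"
| mvexpand userId
| table userId
```

Because the pattern anchors on the literal "userId :" rather than on character offsets, a change in the order or length of other values in the JSON does not break the extraction.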
Hello, I have 2 sites with 2 indexers each, and I want all 4 indexers to have a searchable, replicated copy of the buckets. We had PS in, and they configured this:

site_replication_factor = origin:1,total:4
site_search_factor = origin:1,total:4

From what I read and from the cluster course, this is wrong; it will work, but it is not the way to do it. I think it should be:

site_replication_factor = origin:2,total:4
site_search_factor = origin:2,total:4

Any suggestions? Cheers
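Spelled out as server.conf stanzas on the cluster master, the origin:2 variant looks like this (a sketch; site names and the surrounding multisite settings are illustrative):

```
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:4
site_search_factor = origin:2,total:4
```

With 2 peers per site and total:4, every indexer ends up holding a searchable copy; origin:2 additionally guarantees two copies stay on the originating site rather than leaving the per-site placement to chance.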
Hi, I have data from a search in the format below (I am using the stats list function to retrieve the multivalue intime and outtime):

Name   provider1IN   provider1OUT   provider2IN   provider2OUT
ABC    13:00         14:00          15:00         16:00
       17:00         18:00          19:00         20:00
BCD                                 21:00         22:00
                                    23:00         23:30

For ABC, the in-times at provider1 are 13:00 and 17:00 and the out-times are 14:00 and 18:00; at provider2, the in-times are 15:00 and 19:00 and the out-times are 16:00 and 20:00. For BCD, the provider1 in-times and out-times are null, and only provider2 has values, as shown above.

Requirement: I need to calculate the total time spent by ABC and BCD at provider1 and provider2, i.e. what I want to achieve is:

Name   provider1time                  provider2time
ABC    (14:00-13:00)+(18:00-17:00)    (16:00-15:00)+(20:00-19:00)
BCD    0                              (22:00-21:00)+(23:30-23:00)

Kindly help and suggest how I can achieve this result. Thanks for the help in advance.
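One way to pair up the multivalue in/out times and sum the gaps is mvzip plus mvexpand. A sketch, assuming provider1IN and provider1OUT hold HH:MM strings as shown (repeat the same steps for provider2; the durations come out in seconds):

```
... | eval pair=mvzip(provider1IN, provider1OUT)
    | mvexpand pair
    | eval in=mvindex(split(pair,","),0), out=mvindex(split(pair,","),1)
    | eval dur=strptime(out,"%H:%M") - strptime(in,"%H:%M")
    | stats sum(dur) as provider1time by Name
```

A fillnull on the resulting provider1time would turn BCD's missing value into the required 0.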
I am trying to build a dashboard with a table view similar to the one attached. I have searched the forum but have not found a solution yet. Any help is appreciated.
Hi, I have installed the Ivanti ISM Add-On, but the connection doesn't work. The log file says:
2020-07-15 11:00:33,680 INFO pid=23511 tid=MainThread file=setup_util.py:log_info:117 | Log level is not set, use default INFO
2020-07-15 11:00:33,681 INFO pid=23511 tid=MainThread file=base_modinput.py:log_info:295 | {"tenant": "XXXXXXX/login.aspx", "username": "XXX", "password": "XXX", "role": "Service Desk Analyst"}
2020-07-15 11:00:33,698 ERROR pid=23511 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/urllib3/connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/urllib3/connectionpool.py", line 994, in _validate_conn
    conn.connect()
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/urllib3/connection.py", line 394, in connect
    ssl_context=context,
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/opt/splunk/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/opt/splunk/lib/python3.7/ssl.py", line 870, in _create
    self.do_handshake()
  File "/opt/splunk/lib/python3.7/ssl.py", line 1139, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/urllib3/connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/urllib3/util/retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='XXXXXXX', port=443): Max retries exceeded with url: /login.aspx/api/rest/authentication/login (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ism_service_requests_input.py", line 64, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/input_module_ism_service_requests_input.py", line 44, in collect_events
    auth_token = ism.authenticate(base_url=opt_tenant, username=opt_username, password=opt_password, role=opt_role, api_key=opt_api_key, helper=helper, verify=opt_verify)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ism.py", line 91, in authenticate
    response = requests.post(base_url + '/api/rest/authentication/login', data=json.dumps(payload), headers=headers, verify=verify)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/requests/api.py", line 116, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/requests/api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/TA-ivanti-ism/bin/ta_ivanti_ism/aob_py3/requests/adapters.py", line 514, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='XXXXXXXX', port=443): Max retries exceeded with url: /login.aspx/api/rest/authentication/login (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)')))

2020-07-15 11:00:55,594 INFO pid=23822 tid=MainThread file=setup_util.py:log_info:117 | Log level is not set, use default INFO
2020-07-15 11:00:55,595 INFO pid=23822 tid=MainThread file=base_modinput.py:log_info:295 | {"tenant": "XXXXX/login.aspx", "username": "XXX", "password": "XXX", "role": "Service Desk Analyst"}
2020-07-15 11:00:55,610 ERROR pid=23822 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
(The 11:00:55 error produces the same SSLCertVerificationError traceback as the 11:00:33 one above.)

Does anyone have an idea what the problem is?

Regards
A scheduled report did not run due to maintenance activity. Can someone help me execute it for the 12th and add the results to the summary index?
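If the report feeds a summary index, one common backfill pattern is to run the report's base search over just the missed window and pipe the results to collect. A sketch, with hypothetical dates and index name (substitute the report's actual search and target index):

```
<base search of the report> earliest="08/12/2020:00:00:00" latest="08/13/2020:00:00:00"
| collect index=summary
```

Splunk also ships a backfill helper, fill_summary_index.py, under $SPLUNK_HOME/bin, which automates this for scheduled saved searches over a date range.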
Hi, we are indexing JSON data into Splunk, pushed once every 24 hours. The REST API does not return a delta on each run; it returns all the data, so we have no way to recognize what has been updated and what is unchanged. The timestamp is also the same, and the JSON is huge. In this case, what is the best way to index: delete the old data and reload everything? Or is there a way to archive the existing data and index only the new JSON? Thanks, Suren
How do I calculate a device's uptime? By "time delta" I mean the time between one event with (uptime < 1800) and the next event with (uptime < 1800). (Note: if there is no next such event, calculate the time up to the end of the search period.)
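One sketch using streamstats, under these assumptions: events arrive newest-first (Splunk's default sort), a host field identifies the device, the index name is hypothetical, and each uptime<1800 event marks a restart. The gap from each restart to the next is then:

```
index=net_devices uptime<1800
| streamstats current=f window=1 last(_time) as next_reset by host
| eval delta = coalesce(next_reset, now()) - _time
| table host _time delta
```

Because events stream newest-first, the row before the current one is the chronologically next reset; the most recent reset has none, so coalesce falls back to now(), matching the "up to the end of the period" rule.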
Hi, I'm a bit stuck with a data transformation. I got it to a point where all the columns and values are in the right order, but the rows are offset for each column. I need to find a way to move all the values in the subsequent columns up to the top so they align. Below is a before/after example of the alignment I want to achieve. Thanks for any ideas!

Before:

f1   f2   f3   f4
------------------
foo
bar  x
baz  y    a
          b    1
               2
               3
               4

After:

f1   f2   f3   f4
------------------
foo  x    a    1
bar  y    b    2
baz            3
               4
Hi, we have built a new app internally. We use several CSV files to store relevant data. Is there any way to exclude these files when packaging the app on the development system, or is there a flag we can set on the production system so that the CSV files are not overwritten during an update? The CSV files have different content on the development and production systems. Thanks for your support, Kathrin
Hi, is it possible to create indexed fields with the help of the collect command from a Splunk search?
Hello. Again with these lookups :) — the hardest thing about queries for me. The goal of the search is to identify users who logged in from a workstation other than their own.

index=windows user!=*$
| search (EventCode=4776 OR EventCode=4624)
| transaction user startswith=(EventCode="4624") endswith=(EventCode="4776")
| lookup workst_user hostname as Source_Workstation OUTPUT user as login
| table _time,EventCode,user,Source_Network_Address,Source_Workstation,dest_nt_host,name,status,dest,Logon_Type,Logon_Process

The Source_Workstation and user fields from the events are compared against the hostname and login fields of the workst_user lookup: machines are compared with machines, and users with users. If either comparison does not match, I want to output the non-matching fields of the event. How do I build the right lookup for these conditions?
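One way to express the "not their own workstation" comparison is to look up the expected value and filter on a mismatch. A sketch, assuming the workst_user lookup has hostname and user columns mapping each user to their assigned workstation:

```
index=windows user!=*$ (EventCode=4776 OR EventCode=4624)
| lookup workst_user user OUTPUT hostname as expected_workstation
| where isnotnull(expected_workstation) AND Source_Workstation!=expected_workstation
| table _time, user, Source_Workstation, expected_workstation
```

Looking up by user (rather than by hostname) means each event carries the workstation that user should be on, and the where clause keeps only events where the actual source workstation differs.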
Dear all, I have a clustered environment (3 search heads plus a deployer). On the deployer, the default account activity is working and the required lookups are there and populated, but on the search cluster members (search heads) it is not working, and it seems the merging is not being done correctly. I tried many things, but with no luck. Here is what it shows on the search heads:

Please help. Thanks
Hi, I am trying to plot response-time values against the _time field. I am aware of the timechart and stats commands, which I can use to calculate an average or a percentile, but what I want is to plot the actual values over time. I have the query below, where I want the responseTime field on the y-axis vs _time on the x-axis, with the actual values and not the average. Is that possible without using transforming commands?

index=test host="serverer-p*" RESPONSE "uri=[/checkout/my-app]"
| rex field=_raw "\[(?<responseTime>[^\s]+)"
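For what it's worth, a plain table of _time plus a numeric field can be rendered as a line or scatter chart without any transforming command. A sketch of the query above with the raw values kept (the tonumber guard is an addition, in case the rex capture comes out as a string):

```
index=test host="serverer-p*" RESPONSE "uri=[/checkout/my-app]"
| rex field=_raw "\[(?<responseTime>[^\s]+)"
| eval responseTime=tonumber(responseTime)
| table _time responseTime
```

Switching the visualization tab to Line Chart then plots each individual responseTime at its event time, with no averaging applied.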