All Topics

We onboarded an application recently, and yesterday we saw 100K aggregation issues (log level WARN) and 30K timestamp issues (log level WARN) from one source. We have been monitoring that source for the last 10 days; the events and formatting are consistent, and the source never produces more than 5K events per day. Do I need to ignore these warnings? What causes these issues? Will they affect our environment? I don't know where to start looking. Can someone help? Thank you for your support, Splunkers!
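Both warning types usually trace back to Splunk guessing at event boundaries and timestamps instead of being told explicitly. A minimal props.conf sketch for the parsing tier, assuming a single-line event format; the stanza name and TIME_FORMAT below are illustrative placeholders, not taken from the post:

```
# props.conf -- [my_app] and the timestamp settings are placeholders
[my_app]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
```

With explicit settings like these, Splunk stops running its aggregation and timestamp heuristics on every event, which is typically what generates those WARN messages.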
I am struggling to extract the data between curly brackets. I have tried multiple rex searches, but I still don't get the required output. The message in the log is as below:

msg=Call to https://hostname/rs/cf/webservice/user/authn failed. Status code: 401, response: {"statusCode":"Code","message":"Authentication failed"}|exception=|

I want everything after "response:" up to "exception". Can anyone help with this, please?
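As a sketch, one rex that captures everything from the opening brace after "response:" up to the closing brace before "|exception" (assuming the JSON itself contains no pipe characters):

```
... | rex field=msg "response:\s*(?<response_body>\{[^|]+\})\s*\|exception"
```

The capture group response_body would then hold {"statusCode":"Code","message":"Authentication failed"}. The field name response_body is just an example; rename it as needed.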
I have two separate searches: search1 returns 17 results and search2 returns 20 results. The key column that exists in both searches is "target_id". How do I show all results whose target_id appears in search1 but not in search2? Can I solve this using multisearch, join, or a subsearch, or is there a better way? Search2 acts like a filter: I don't want to see any result in search1 whose key column also appears in search2.
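A common pattern, sketched here with placeholder base searches, is to let search2 run as a subsearch that feeds NOT filters back into search1:

```
<search1 terms> NOT [ search <search2 terms> | fields target_id | dedup target_id ]
```

The subsearch returns the target_id values found by search2, and NOT excludes any search1 result whose target_id matches one of them. An alternative that avoids subsearch result limits is to combine both searches and keep the IDs seen only by search1:

```
(<search1 terms>) OR (<search2 terms>)
| eval src=if(<condition identifying search2 events>, "s2", "s1")
| stats dc(src) AS nsrc values(src) AS srcs BY target_id
| where nsrc=1 AND srcs="s1"
```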
I've configured the Azure Add-on (2.0) on Splunk Enterprise 8.0.2, but it doesn't appear to be getting past initialization. In debug mode, I just get:

2020-03-14 00:00:16,223 INFO pid=1832 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-03-14 00:00:16,917 INFO pid=1832 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-03-14 00:00:17,504 INFO pid=1832 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-03-14 00:00:18,467 INFO pid=1832 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-03-14 00:00:19,477 DEBUG pid=1832 tid=MainThread file=base_modinput.py:log_debug:286 | Splunk Getting proxy server.
2020-03-14 00:00:19,478 INFO pid=1832 tid=MainThread file=setup_util.py:log_info:114 | Proxy is not enabled!
2020-03-14 00:00:19,478 INFO pid=1832 tid=MainThread file=client_abstract.py:__init__:161 | u'eventhub.pysdk-ea3d48b9': Created the Event Hub client
2020-03-14 00:00:19,478 DEBUG pid=1832 tid=MainThread file=message.py:__init__:109 | Deallocating 'AMQPValue'
2020-03-14 00:00:19,479 DEBUG pid=1832 tid=MainThread file=message.py:__init__:109 | Destroying 'AMQPValue'
2020-03-14 00:00:19,480 DEBUG pid=1832 tid=MainThread file=client.py:open:234 | Opening client connection.
2020-03-14 00:00:19,480 DEBUG pid=1832 tid=MainThread file=__init__.py:initialize:157 | Initializing platform.
...lather, rinse, repeat...

There is nothing in python.log either. I configured my connection string as:

Endpoint=sb://.servicebus.windows.net/;ShareAccessKeyName=RootManageShareAccessKey;SharedAccessKey=
Event Hub Name: adlogs
Index: eventhub

All other settings are defaults. I've gone through the Azure config and all looks good as well. Any ideas why it would stop after "Initializing platform" with no errors and then just restart on the next interval?
Hi there, please advise what I need to do to get Solaris logs into Splunk. What are the default ports used, and can the ports be customized? I'm new to Solaris and Splunk. Please help.
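As a sketch of the usual setup: install a Universal Forwarder on the Solaris host, tell it which files to monitor, and point it at the indexer's receiving port. The paths, index, and hostname below are illustrative placeholders:

```
# inputs.conf on the Solaris host
[monitor:///var/adm/messages]
sourcetype = syslog
index = os

# outputs.conf on the same host
[tcpout:primary_indexers]
server = your-indexer.example.com:9997
```

Port 9997 is only the conventional choice for forwarder-to-indexer traffic; it is customizable, as long as the receiving port configured on the indexer (Settings > Forwarding and receiving > Receive data) matches what outputs.conf points at.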
Dears, thanks a lot for the help so far. I have two heavy forwarders (HF) and one all-in-one indexer (AIO). I'm facing this issue for the first time: HF-1 is not forwarding logs to the AIO, though HF-2 is sending normally and I can search its logs. I tried telnet in both directions and it connected, so there seems to be no network problem; the firewall is down and SELinux is disabled. Below are some logs from HF-1:

03-14-2020 02:00:54.097 +0300 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group primary_indexers has been blocked for 230 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
03-14-2020 01:23:22.056 +0300 WARN TcpOutputProc - Read operation timed out expecting ACK from 10.244.2.100:9997 in 300 seconds.
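One place to start, sketched below, is the receiver's own queue metrics: if queues on the AIO are full, it stops ACKing and the HF pauses exactly as in those warnings. The index and field names come from Splunk's standard _internal logging:

```
index=_internal source=*metrics.log* group=queue
| timechart span=5m max(current_size_kb) BY name
```

Scope that search to the AIO host. Persistently pegged parsing/typing/indexing queues point at the receiver itself, while healthy queues suggest the problem is specific to the HF-1 to AIO connection (for example, SSL or ACK settings mismatched between the two forwarders' outputs.conf).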
I installed the app and was able to add servers and retrieve data. I found what looks to be a value for all of the connections to the F5, but I need to report on just the APM connections. Does anyone have an idea where that can be found, or perhaps some other way of retrieving it? Our F5s are at version 14; the app only says it supports up to v12, but it seems to be working.

index=f5 STATISTIC_CLIENT_SIDE_CURRENT_CONNECTIONS statistics_types=STATISTIC_CLIENT_SIDE_CURRENT_CONNECTIONS "get_all_statistics.virtual_server.name"="/Common/myservername"

This returns what appears to be all connections for the server; the APM connections are about half of that.

{ [-]
  time_stamp: 0
  type: STATISTIC_CLIENT_SIDE_CURRENT_CONNECTIONS
  value: { [-]
    high: 0
    low: 138
  }
}

Thanks
Hi, can I run a search that shows which logs were blocked by a specific policy on the Palo Alto firewall? If that is possible, please let me know how.
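A sketch, assuming the Palo Alto Networks Add-on's usual index, sourcetype, and field names (adjust all of these to your environment):

```
index=pan_logs sourcetype=pan:traffic action=denied rule="My-Block-Policy"
| stats count BY src_ip, dest_ip, app, rule
```

In the add-on's traffic logs, the rule field carries the name of the security policy that matched, and action distinguishes allowed from denied/dropped traffic, so filtering on both shows which logs a specific policy blocked.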
I need to power off a Splunk server tonight for emergency power maintenance. Does anyone know where I can find the shutdown procedure?
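For reference, the documented way to stop Splunk cleanly is the CLI, run as the user Splunk runs as (the path assumes a default Unix install under $SPLUNK_HOME):

```
$SPLUNK_HOME/bin/splunk stop
# ... perform the power maintenance, then bring it back with:
$SPLUNK_HOME/bin/splunk start
```

splunk stop shuts the server down gracefully so in-flight data is flushed before power-off; the full procedure is in the Admin Manual under "Start and stop Splunk Enterprise".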
I would like to know the best formula to calculate my storage needs for setting up SmartStore in a non-clustered Splunk environment. I would also like to know whether SmartStore is a recommended option for a non-clustered indexer deployment. Can we put both hot and cache storage on the same mount point?
We have cases where we need to run an alert at 8 am on Monday and at 9 am on Tuesday, i.e. at irregular times. Is there a way to specify such schedules using cron syntax or some other method?
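Cron can't express two different day-dependent times in a single expression, but since each Splunk alert carries its own cron schedule, the usual approach is two copies of the same alert:

```
# Alert copy 1 -- runs 08:00 every Monday
0 8 * * 1

# Alert copy 2 (a clone of the same search) -- runs 09:00 every Tuesday
0 9 * * 2
```

A single expression like 0 8,9 * * 1,2 would not work, because cron expands fields independently and would fire at both 8 and 9 on both days.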
We have thousands of UFs running as the Unix root user, and we are discussing whether to keep it that way or run the UFs as a dedicated user. My question therefore is: what are the pros and cons of running thousands of UFs as root?
Hello all, I have a field called component with the values A, B, C, and D. I want to alert if a new value appears, for instance E, and the alert should show the new value. Thanks in advance.
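One common sketch uses a lookup as the memory of values seen so far; the lookup name, index, and sourcetype below are placeholders you would create:

```
index=foo sourcetype=bar
| stats count BY component
| lookup known_components.csv component OUTPUT component AS already_known
| where isnull(already_known)
```

Alerting when this search returns any results surfaces only the new values (e.g. E). A second scheduled search keeps the baseline up to date:

```
index=foo sourcetype=bar
| stats count BY component
| fields component
| inputlookup append=t known_components.csv
| dedup component
| outputlookup known_components.csv
```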
Per Duo support, Splunk Duo connectors 1.1.6b and 1.1.6 do not support v2 auth logs; therefore the connector cannot pull those 2FA device IPs from the logs. When will an updated Splunk Duo connector that supports v2 auth logs be available? Thanks.
I have sample data as below:

Assigned Analyst | Assigned Date
John | 2018-03-09 00:00:00.0, 2018-03-23 00:00:00.0, 2018-03-30 00:00:00.0, 2018-04-16 00:00:00.0, 2018-04-24 00:00:00.0, 2018-04-26 00:00:00.0, 2018-05-03 00:00:00.0
Joe  | 2017-03-22 00:00:00.0, 2017-03-23 00:00:00.0, 2017-05-01 00:00:00.0, 2017-05-02 00:00:00.0, 2017-05-18 00:00:00.0, 2017-05-23 00:00:00.0

Now I would like to find the time span for each analyst, in years and days, based on the earliest and latest values of Assigned Date. Assigned Date is simply the date on which the ticket was assigned; Ticket Number is the unique identifier, which I didn't include in the sample data. Thanks in advance.
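A sketch of one way to do this in SPL, assuming the extracted field names are assigned_analyst and assigned_date (rename to match your data) and that the trailing ".0" is a one-digit fractional second:

```
... | eval assigned_epoch = strptime(assigned_date, "%Y-%m-%d %H:%M:%S.%1N")
| stats earliest(assigned_epoch) AS first_assigned latest(assigned_epoch) AS last_assigned BY assigned_analyst
| eval span_days = floor((last_assigned - first_assigned) / 86400)
| eval span      = floor(span_days / 365) . " years, " . (span_days % 365) . " days"
```

For the sample above, John would span 2018-03-09 through 2018-05-03 (0 years, 55 days) and Joe 2017-03-22 through 2017-05-23 (0 years, 62 days).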
Hello people, I am creating an alert that exports a CSV. The problem is that when the .csv is exported it only has rw permissions, and I want it to have rw-r. I wrote a script that changes the file to the permissions I want, but it isn't working. I have read all the documentation on configuring scripted alerts, but I can't resolve this problem. Can anyone help me?
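For what it's worth, a minimal scripted-alert wrapper would look something like the sketch below. It assumes the legacy scripted-alert convention where Splunk passes the path to the results file as the eighth argument; the script must also be executable by the user Splunk runs as:

```
#!/bin/sh
# $8 = path to the results file handed over by Splunk (legacy scripted alerts)
RESULTS_FILE="$8"
# 640 = rw-r----- (owner read/write, group read)
[ -n "$RESULTS_FILE" ] && chmod 640 "$RESULTS_FILE"
```

If the script never seems to run at all, splunkd.log usually records the invocation attempt, which helps separate a permissions problem inside the script from the alert never calling it.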
I'm having a difficult time getting this add-on to actually pull message trace logs from Exchange Online, and was wondering what role/access the account needs to be set at in the Exchange admin console. Or maybe I'm just missing something entirely with the configuration of this add-on. Log messages from /opt/splunk/var/log/splunk/ta_ms_o365_reporting_ms_o365_message_trace.log show successful connections and GET requests:

DEBUG pid=31238 tid=MainThread file=connectionpool.py:_new_conn:809 | Starting new HTTPS connection (1): reports.office365.com
DEBUG pid=31238 tid=MainThread file=connectionpool.py:_make_request:400 | https://reports.office365.com:443 "GET /ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-03-11T22:36:43.072002Z'%20and%20EndDate%20eq%20datetime'2020-03-11T23:36:43.072002Z' HTTP/1.1" 200 None
DEBUG pid=31238 tid=MainThread file=base_modinput.py:log_debug:286 | Next URL is https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-03-11T22%3A36%3A43.072002Z'%20and%20EndDate%20eq%20datetime'2020-03-11T23%3A36%3A43.072002Z'&$skiptoken=1999
DEBUG pid=31238 tid=MainThread file=base_modinput.py:log_debug:286 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-03-11T22%3A36%3A43.072002Z'%20and%20EndDate%20eq%20datetime'2020-03-11T23%3A36%3A43.072002Z'&$skiptoken=1999
INFO pid=31238 tid=MainThread file=setup_util.py:log_info:114 | Proxy is not enabled!
Hi all, we have been informed that our current/valid internal SHA2 intermediate certificate is being replaced with a newer version. The certificate management team has asked that the new one sit alongside the older/current one on all systems that need it. I'm aware of the importance of the certificate chain, and was wondering whether simply inserting the new internal SHA2 intermediate certificate into the existing .pem chain after the current/older one would be valid. Example below for our Splunk forwarders:

-----BEGIN CERTIFICATE-----
>>> splunkforwarder host
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
>>> splunkforwarder.key
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
>> CURRENT Intermediate details
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
>> INSERT NEW Intermediate details here
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
>>> ROOT details
-----END CERTIFICATE-----

thanks
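One way to sanity-check an assembled chain before rolling it out (the filenames here are hypothetical) is openssl:

```
openssl verify -CAfile root.pem -untrusted intermediates.pem forwarder.pem
```

where intermediates.pem contains both the current and the new intermediate. Verification walks the chain from the leaf upward and uses whichever intermediate actually signed the certificate, so carrying both intermediates side by side is generally harmless; ordering matters mainly in that the leaf certificate must come first.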
In my modular input I want to update a configuration setting between runs so I don't poll for the same data again and again. The code I use to do this is below. It uses the splunklib.client library (https://docs.splunk.com/DocumentationStatic/PythonSDK/1.1/client.html#splunklib.client.Service.login) to make a REST connection to Splunk using the session held by the modular input, grabs the config stanza, and calls item.update to update the query_param. The call to item.update causes the entire modular input to reload, and in the logs I get a "Winsock error 10054", most likely because of the reload. I'm on Splunk 8.0.1 running Python 3. I can't find any documentation on this behavior. I tried connecting with username/password instead of the SESSION_TOKEN, but hit the same issue. I also tried adding service.logout at the end, but that code never executes. Is this expected?

try:
    args = {'host': 'localhost', 'port': SPLUNK_PORT, 'token': SESSION_TOKEN}
    service = Service(**args)
    item = service.inputs[STANZA[len(APP_NAME) + 3:]]
    item.update(query_params=dictionary_to_params(url_args))
except Exception as e:
    logging.error("Error trying to update args: %s" % str(e))

One last note: in the logs I get the following warning, which I assume is caused by the app restarting.

03-16-2020 08:52:25.232 -0400 WARN HttpListener - Socket error from 127.0.0.1:56292 while accessing /servicesNS/nobody/launcher/data/inputs/my_app/my_data_input/: Winsock error 10054
I have 117 sites listed by Homeland Security, and I need to check whether any of our machines have visited them. We have McAfee Web Gateway logs funneled into Splunk. What's the best way to go about looking for that activity?
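A common sketch: upload the 117 sites as a lookup file (the lookup name, index, sourcetype, and field names below are placeholders), then let a subsearch turn the list into filters against the proxy logs:

```
index=proxy sourcetype=mcafee*
    [ | inputlookup dhs_sites.csv | fields url ]
| stats count BY src_ip, url
```

For this to match, the lookup's field name (url here) must be the same field the McAfee Web Gateway logs use for the requested site; rename inside the subsearch if needed. If the list contains bare domains rather than full URLs, matching on a domain field, or wildcard matching via a lookup definition, is the more reliable route.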