All Topics

Hi, I have a field named VULN in index=ABC sourcetype=XYZ. We need to know whether any new VULN values show up in the last 48 hours of data compared to the previous month. Basically, we need to see how many new VULNs are in the data compared to last month, and how many unique IPs are affected. Thanks in advance!
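One way to frame this is to search the last 30 days, record when each VULN value was first seen, and keep only the values first seen within the last 48 hours; a minimal sketch, assuming the affected IP lives in a hypothetical field called src_ip:

index=ABC sourcetype=XYZ earliest=-30d
| stats earliest(_time) as first_seen, dc(src_ip) as unique_ips by VULN
| where first_seen >= relative_time(now(), "-48h")
| table VULN unique_ips

Each remaining row is a VULN that did not appear in the preceding month, with its count of distinct affected IPs.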
Gentlemen, we are using https://splunkbase.splunk.com/app/1914/. Splunk is not extracting all the fields visible in the Windows Sysmon events; it leaves out a lot of fields. Here is what I suspect is the cause, but I need someone to advise whether I am on the right track. In the events, the sourcetype shows as: WinEventLog:Microsoft-Windows-Sysmon/Operational. However, on my search head, when I go to Settings >> Source types >> All, I see a different name: XmlWinEventLog:Microsoft-Windows-Sysmon/Operational. There is no sourcetype there named WinEventLog:Microsoft-Windows-Sysmon/Operational. Is this conflict between sourcetype names causing the issue, and what needs to be done to fix it? Things I tried:
1. Added the following to inputs.conf to render the events as XML:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
renderXML=true
sourcetype=XMLWinEventLog:Microsoft-Windows-Sysmon/Operational

This did extract all the fields, but it ended up showing the events in XML format. How can I keep the default/original format for displaying Windows events yet still extract all the fields? Thanks all
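For reference, the add-on's field extractions appear to be written against the XmlWinEventLog sourcetype, which would explain why renderXML unlocked them; a minimal inputs.conf sketch under that assumption (the explicit sourcetype override can usually be dropped, since renderXML already produces the XmlWinEventLog:... sourcetype the add-on expects):

# inputs.conf on the forwarder (sketch)
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
renderXML = true

The XML _raw display is the trade-off of renderXML; the extracted fields still appear in the fields sidebar and in searches even though the raw event is XML.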
Hello, I'm attempting to set the panel background color to transparent within a couple of Choropleth panels in Dashboard Studio. However, nothing seems to work... I have attempted to set the background using the following:

"transparent": true,
"backgroundColor": "rgba(0, 0, 0, 0)",
"backgroundColor": "transparent",

I can successfully change the color to white or other colors, but not to transparent. Thanks in advance!
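For placement, those keys would sit in the visualization's options block in the dashboard definition; a sketch with a hypothetical visualization id and the SVG choropleth type (whether a transparent value is honored may depend on the visualization type and Splunk version):

"viz_choropleth_1": {
    "type": "splunk.choropleth.svg",
    "options": {
        "backgroundColor": "transparent"
    }
}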
Hello all, I am facing an issue collecting data from two of the hosts. We are using rsyslog to ingest data. Logs are getting updated in the logdump directory of the HF, but I am not able to see the logs in Splunk. We can see logs from other hosts, but we are having issues with two particular hosts with high log volume. I don't see any errors/warnings related to queueing. While checking the status of the rsyslog service, we can see the errors below.

invalid or yet-unknown config file command 'TCPServerAddress' - have you forgotten to load a module? [v8.24.0-57.el7_9 try http://www.rsyslog.com/e/3003 ]
Could not create tcp listener, ignoring port 515 bind-address (null). [v8.24.0-57.el7_9 try http://www.rsyslog.com/e/2077 ]
module 'imtcp.so' already in this config, cannot be added [v8.24.0-57.el7_9 try http://www.rsyslog.com/e/2221 ]

Any suggestions/feedback are welcome. Thanks
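Those messages suggest imtcp is being loaded in more than one config file (hence "already in this config"), and that a legacy TCPServerAddress directive is being read before any module that understands it, so the port 515 listener never starts. A minimal sketch of the modern equivalent, assuming the listener belongs in a single file under /etc/rsyslog.d/ and imtcp is loaded nowhere else:

# load the TCP input module exactly once across all rsyslog config files
module(load="imtcp")

# one listener for the high-volume hosts
input(type="imtcp" port="515")

After consolidating, rsyslogd -N1 validates the configuration before a service restart.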
Hello all, what is the best way to collect and monitor system health and performance metrics from various security devices and endpoint devices? Log collection will be SNMP-based or via API. Please recommend an add-on or app, if available, for infrastructure health monitoring. Is Splunk ITSI recommended for this requirement? TIA
We recently updated our Splunk app to use the latest SDK, and since then we've been running into an issue where our custom configuration page (where users enter an API key for our service) fails to load with the error below. The error seems to indicate that the problem is the field having no value at initial install (which has always been the default state). Any suggestions on how we can address this?

{"messages":[{"type":"ERROR","text":"Unexpected error \"<class 'splunktaucclib.rest_handler.error.RestError'>\" from python handler: \"REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File \"/opt/splunk/etc/apps/SA-GreyNoise/bin/SA_GreyNoise/splunktaucclib/rest_handler/handler.py\", line 124, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File \"/opt/splunk/etc/apps/SA-GreyNoise/bin/SA_GreyNoise/splunktaucclib/rest_handler/handler.py\", line 303, in _format_response\n masked = self.rest_credentials.decrypt_for_get(name, data)\n File \"/opt/splunk/etc/apps/SA-GreyNoise/bin/SA_GreyNoise/splunktaucclib/rest_handler/credentials.py\", line 203, in decrypt_for_get\n data[field_name] = clear_password[field_name]\nTypeError: 'NoneType' object is not subscriptable\n\". See splunkd.log/python.log for more details."}]}
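From the traceback, decrypt_for_get indexes into clear_password without a None check, and a fresh install with no saved API key would return no stored clear password. A hypothetical guard at that spot in splunktaucclib/rest_handler/credentials.py, as a sketch rather than the library's official fix:

# sketch: tolerate a missing credential on first load (assumption:
# clear_password is None until the user saves an API key)
if clear_password and field_name in clear_password:
    data[field_name] = clear_password[field_name]

If a later splunktaucclib release guards this case, updating the bundled library may be the cleaner route than patching it locally.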
Hi, I need to create a chart showing the top categories over time. At the moment, the chart I am getting places time on the y-axis and the categories on the x-axis. In addition, there are over 50 possible categories, and the user should see only the top 20 categories with the respective count for each time period. So I would imagine 20 lines on the graph, no? Here is what I currently have: [screenshot] Can you please help? Many thanks, Patrick
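That axis orientation usually means the search ends in something like chart count over category rather than timechart; a sketch that keeps _time on the x-axis and caps the series at the top 20, assuming hypothetical index/sourcetype names and a field called category:

index=your_index sourcetype=your_sourcetype
| timechart span=1h limit=20 useother=false count by category

timechart's limit keeps the 20 highest-count series, and useother=false drops the aggregated OTHER series instead of drawing a 21st line.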
Hello, we have several cases where we relate the data between panels. In the example screenshots below, we have:
1/ A chart with the number of database threads over time, where the sum of threads per time unit involved in executing a particular SQL statement (SQL hash) is represented by a different color.
2/ A pie chart showing the share of each SQL statement/hash in the given time span.
Is there any easy way to keep the colors for the same SQL statements/hashes consistent between the two panels? Kind Regards, Kamil
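If these are Simple XML panels, one way is to pin each series name to a fixed color in both panels with charting.fieldColors; a sketch with hypothetical SQL hash values standing in for the real series names:

<option name="charting.fieldColors">
  {"3f2a9c1d": 0x1f77b4, "8b4e77aa": 0xff7f0e, "c91d02ef": 0x2ca02c}
</option>

The same option block, with identical mappings, would go into each panel so a given hash always renders in the same color.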
I am trying to create a report that shows month-over-month web service average response time as a percentage against a threshold.

sourcetype="web_logs" `web_resp_index` *
| bucket _time span=month
| stats count as total_count count(eval(resp_time>=500)) as fail_count count(eval(resp_time<500)) as success_count count(eval(resp_time=="")) as null_count by source _time
| eval success_percent=round((success_count/total_count)*100,2)
| eval _time=strftime(_time, "%b")
| fields - total_count fail_count success_count null_count

I now have:

source  _time  success_percent
www1    Jan    94.6
www1    Feb    93.2
www1    Mar    94.3
www2    Jan    98.5
www2    Feb    92.4
www2    Mar    84

I am looking to transpose and group so that I have one row per source and monthly columns:

Source  Jan   Feb   Mar
www1    94.6  93.2  94.3
www2    98.5  92.4  84
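xyseries can do this pivot directly after the stats; a sketch that replaces the final two lines above (note that xyseries orders columns lexicographically, so Feb/Jan/Mar would come out alphabetical; keeping a sortable label such as %Y-%m and renaming afterwards avoids that):

| eval month=strftime(_time, "%b")
| xyseries source month success_percent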
Hello, we had a case recently where, following some issues with the Apache certificate, a scheduled job got stuck in status "parsing", and by the time the Apache issues were cleared, the job had already expired. As a consequence, no other jobs with the same could be triggered; they got skipped. How can I avoid this kind of situation in the future? Is there any way to delete all expired jobs that are not in status "done"? Would that be a solution? Kind Regards, Kamil
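For inspection, the search jobs REST endpoint can be queried from SPL to list jobs that are not finished; a sketch (an individual job can then be removed through the Activity > Jobs manager, or with a DELETE on its sid via the REST API):

| rest /services/search/jobs
| search dispatchState!="DONE"
| table sid author dispatchState ttl

This shows what is stuck before deciding whether clearing the stale jobs unblocks the schedule; as a preventive measure, a schedule window on the saved search (schedule_window in savedsearches.conf) gives delayed runs room before they are skipped.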
We have installed the following Splunk Alert Manager apps on our search head. During the installation, we created a new index on the search head to store the fired-alert data:
https://splunkbase.splunk.com/app/2665/
https://splunkbase.splunk.com/app/3365/
We run all our saved searches/alerts from the search head, not from the indexers. Can you please tell me whether we need to create the index (alerts) on the indexers as well? We have started receiving license warnings on the search head:

Mar 21, 2022, 12:00:00 AM (8 hours ago) This pool has exceeded its configured poolsize=1 bytes. A CLE warning has been recorded for all members server_namexxx auto_generated_pool_enterprise enterprise cle_pool_over_quota
Licensing warnings will be generated today. See License Manager for details.
3/21/2022, 8:03:41 AM License warning issued within past 24 hours: Mon Mar 21 00:00:00 2022 EDT. Refer to the License Usage Report view on license master '' to find out more.
3/21/2022, 8:03:41 AM Daily indexing volume limit exceeded. Per the Splunk Enterprise license policy in effect, search is disabled after 45 warnings over a 60-day window. Your Splunk deployment is subject to license enforcement. See License Manager for details.
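To see which index and pool are actually consuming the license, the license usage log on the license master can be broken down per index; a standard sketch:

index=_internal source=*license_usage.log* type="Usage"
| stats sum(b) as bytes by idx, pool
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB

If the new alerts index shows up here, the Alert Manager data is being metered against the pool; in general an index has to exist wherever its events are actually indexed, so it is only needed on the indexers if the alert data is forwarded to them.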
During the integration of SailPoint, we initially got a certificate error, as below:
https://community.splunk.com/t5/All-Apps-and-Add-ons/Certificate-error-with-SailPoint-Adaptive-Response-app/m-p/520271#M63493
After adding the CA-signed certificate in the aob_py3 directory, we are getting the error below.

2022-03-21 23:07:22,432 ERROR pid=15767 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_sailpoint/bin/splunk_ta_sailpoint/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/Splunk_TA_sailpoint/bin/sailpoint_identityiq_auditevents.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/Splunk_TA_sailpoint/bin/input_module_sailpoint_identityiq_auditevents.py", line 143, in collect_events
    headers = build_header(helper, identityiq_url, client_id, client_secret)
  File "/opt/splunk/etc/apps/Splunk_TA_sailpoint/bin/input_module_sailpoint_identityiq_auditevents.py", line 109, in build_header
    return build_oauth2_header(helper, identityiq_url, client_id, client_secret)
  File "/opt/splunk/etc/apps/Splunk_TA_sailpoint/bin/input_module_sailpoint_identityiq_auditevents.py", line 94, in build_oauth2_header
    access_token = json.loads(token_body)['access_token']
  File "/opt/splunk/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/opt/splunk/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/splunk/lib/python3.7/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
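The JSONDecodeError at line 1 column 1 means the OAuth token endpoint returned something other than JSON, typically an HTML error page, a redirect body, or an empty response when authentication or the URL is wrong. A hypothetical debug wrapper around the parse in build_oauth2_header to surface the raw body in the add-on logs (sketch only):

# sketch: log the raw token response before parsing, so a non-JSON
# body shows up in the add-on's log instead of a bare traceback
try:
    access_token = json.loads(token_body)["access_token"]
except (json.JSONDecodeError, KeyError):
    helper.log_error("Token request returned non-JSON or incomplete body: %r" % token_body)
    raise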
Hi, could you please help me: is it possible to create an alert in the Splunk trial version 8.2.5?
Hi, I need some help setting up a dashboard that will allow us to closely monitor the login activity of certain users and the IP addresses they use, to ensure we don't have any exploiters trying to access our systems. Another thing I would like to do, if possible, is create a dashboard where we can input a username and it will show us the login data for that user over a certain period of time. Regards, Aidan Smith
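For the username-driven part, a text input token can feed the panel search; a sketch, assuming a hypothetical index=auth with fields user, src_ip, and action:

index=auth action=success user="$user_tok$"
| stats count as logins, values(src_ip) as source_ips, min(_time) as first_login, max(_time) as last_login by user
| convert ctime(first_login) ctime(last_login)

The dashboard's text input would set $user_tok$, and the shared time picker covers the "certain period of time" requirement.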
I am a bit confused here about the controller configuration data. In this configuration, if I declare the tiers and nodes as below in my C++ application's main function, which tier will relate to which node?

const char APP_NAME[] = "SampleC";
// arrays of C strings, so the elements must be const char* pointers
const char* TIER_NAME[] = {"SampleCTier1", "SampleTier2"};
const char* NODE_NAME[] = {"SampleCNode1", "SampleNode2", "SampleNode3"};
const char CONTROLLER_HOST[] = "controller.somehost.com";
const int CONTROLLER_PORT = 8080;
const char CONTROLLER_ACCOUNT[] = "customer1";
const char CONTROLLER_ACCESS_KEY[] = "MyAccessKey";
const int CONTROLLER_USE_SSL = 0;
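For context, the AppDynamics C/C++ SDK configures one tier name and one node name per agent instance, so the SDK itself never pairs entries across two arrays; a sketch under that assumption, using the C SDK configuration calls (names to be checked against your SDK headers):

// sketch: one tier/node pair per process (assumption about the SDK model)
appd_config* cfg = appd_config_init();
appd_config_set_app_name(cfg, "SampleC");
appd_config_set_tier_name(cfg, "SampleCTier1");
appd_config_set_node_name(cfg, "SampleCNode1");
appd_config_set_controller_host(cfg, "controller.somehost.com");
appd_config_set_controller_port(cfg, 8080);
appd_config_set_controller_account(cfg, "customer1");
appd_config_set_controller_access_key(cfg, "MyAccessKey");
appd_config_set_controller_use_ssl(cfg, 0);

Under that model, a second node would mean a second process (or a second agent instance) with its own tier/node pair.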
Hi experts, I would appreciate some design help with a query where I want to see all src_ips that query two different domains within X minutes of each other, over a longer search period.
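One pattern is to sort the DNS events per source and use streamstats over a sliding two-event window to catch a pair of different domains inside the interval; a sketch, assuming hypothetical DNS data with fields src_ip and query, two placeholder domains, and a 5-minute (300-second) window:

index=dns (query="domain-a.example" OR query="domain-b.example")
| sort 0 src_ip _time
| streamstats window=2 dc(query) as distinct_domains range(_time) as gap by src_ip
| where distinct_domains=2 AND gap<=300
| stats values(query) as domains min(_time) as first_seen by src_ip

Each surviving src_ip issued queries for both domains within 300 seconds of each other at least once during the search period.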
I am using the transaction command to check the start time and end time of a transaction. I have used:

| transaction TxnId startswith="NEW TXN" endswith="statusY" keeporphans=true
| eval starttime=_time
| eval endtime=_time+duration
| eval starttime=strftime('starttime', "%Y-%m-%d %H:%M:%S.%3N")
| eval endtime=strftime('endtime', "%Y-%m-%d %H:%M:%S.%3N")
| table TxnId starttime endtime

I want to check whether all transactions have a start time and an end time, for the success rate. At the moment, even when the endswith="statusY" event is not there, an end time is still calculated. What can I do to make sure there is no end time when the endswith="statusY" condition is not met? And when both the startswith and endswith conditions are met, the table should show a status of success, otherwise blank.
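transaction flags complete transactions with closed_txn=1 (orphans kept by keeporphans=true carry closed_txn=0), which can gate both the end time and the status; a sketch:

| transaction TxnId startswith="NEW TXN" endswith="statusY" keeporphans=true
| eval status=if(closed_txn=1, "success", "")
| eval starttime=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| eval endtime=if(closed_txn=1, strftime(_time+duration, "%Y-%m-%d %H:%M:%S.%3N"), "")
| table TxnId starttime endtime status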
We are considering calculating a specific field (list) during indexing. The calculation will be:

| eval list=if(match(dhost,"\.[\w]{2,3}\.[\w]{2}:?[\d]?"),"mozilla","iana")

1. What is the performance impact?
2. How should it be done?
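At index time, an eval like this maps to INGEST_EVAL in transforms.conf, referenced from props.conf; a sketch, assuming a hypothetical sourcetype and that dhost is available at parse time (an index-time eval cannot see fields that only exist as search-time extractions):

# props.conf (sketch)
[your_sourcetype]
TRANSFORMS-set_list = set_list_field

# transforms.conf (sketch)
[set_list_field]
INGEST_EVAL = list=if(match(dhost, "\.[\w]{2,3}\.[\w]{2}:?[\d]?"), "mozilla", "iana")

The result becomes an indexed field, so the cost is some index-time CPU on the parsing tier plus extra index size for every event, in exchange for skipping the regex at search time.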
Trying to set up an alert for the two scenarios mentioned below.
Scenario 1: To determine whether the connection between Xyz and the abc service is healthy, check for the string "IEX API Call Successfully got agent schedules data". This message occurs in batches roughly every 5 minutes, so a good threshold might be to alert if the message is not seen for >= 10 minutes.
Scenario 2: Another item to check is the connection between the service and the xyz host. The string for that is "Schedule successfully posted to the provider API". The cadence of those messages is the same, so an absence of > 10 minutes may be a good place to start.
Below are the sample Splunk events. I would like to set up an alert that sends an e-mail if these keyword events do not appear in the last 10 minutes. Please help.

3/21/22 4:44:13.000 AM 2022-03-21 04:44:13 [pool-6-thread-2] INFO c.i.e.i.s.c.i.AgentResourceServiceImpl - IEX API Call Successfully got agent schedules data.
3/21/22 4:44:13.000 AM 2022-03-21 04:44:13 [pool-6-thread-2] INFO c.i.e.i.s.c.i.AgentResourceServiceImpl - IEX API Call Successfully got agent schedules data.
3/21/22 4:44:13.000 AM 2022-03-21 04:44:13 [pool-6-thread-2] INFO c.i.e.i.s.c.i.AgentResourceServiceImpl - IEX API Call Successfully got agent schedules data.
3/21/22 4:44:13.000 AM 2022-03-21 04:44:13 [pool-6-thread-2] INFO c.i.e.f.a.w.s.i.SchedulesServiceImpl - Schedule successfully posted to the provider Api.
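Both scenarios can live in one scheduled search that counts each string over the last 10 minutes and alerts when either count is zero; a sketch, assuming a hypothetical index=app_logs (schedule it every 5 minutes and trigger the e-mail action when the number of results is greater than 0):

index=app_logs earliest=-10m ("IEX API Call Successfully got agent schedules data" OR "Schedule successfully posted to the provider Api")
| stats count(eval(searchmatch("IEX API Call Successfully got agent schedules data"))) as agent_schedules, count(eval(searchmatch("Schedule successfully posted to the provider Api"))) as schedule_post
| where agent_schedules=0 OR schedule_post=0

stats still emits a single row of zeros when no events match at all, so the where clause fires for a total outage as well as for a single missing message type.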
Hi guys, I am searching through a Splunk index for the presence of multiple conditions, as below.

index="ind_name" return object
| bin _time span=1d
| where log like "%'feature1': {'result': '-9999%"
| stats count as cnt_feature1_NULL by _time
| appendcols [search index="ind_name" return object | bin _time span=1d | where log like "%'feature1': {'result': '%" | stats count as cnt_feature1_NOT_NULL by _time]
| appendcols [search index="ind_name" return object | bin _time span=1d | where log like "%'feature2': {'result': '-9999%" | stats count as cnt_feature2_NULL by _time]
| appendcols [search index="ind_name" return object | bin _time span=1d | where log like "%'feature2': {'result': '%" | stats count as cnt_feature2_NOT_NULL by _time]

I have to search for multiple expressions and count them, 20 of them. Is there a better way to search than appendcols? Thank you
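All the counts can come from a single pass with conditional evals inside one stats, which removes every appendcols and scales to 20 expressions; a sketch for the first two features:

index="ind_name" return object
| bin _time span=1d
| stats count(eval(like(log, "%'feature1': {'result': '-9999%"))) as cnt_feature1_NULL, count(eval(like(log, "%'feature1': {'result': '%"))) as cnt_feature1_NOT_NULL, count(eval(like(log, "%'feature2': {'result': '-9999%"))) as cnt_feature2_NULL, count(eval(like(log, "%'feature2': {'result': '%"))) as cnt_feature2_NOT_NULL by _time

Each count(eval(like(...))) counts only the events matching its pattern, so one base search replaces the four (or twenty) subsearches.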