
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I recently realized that certain accelerated data models are occupying a lot of disk space, so I lowered the Summary Range of the related data models from 3 months to 1 month. So far I have not seen a drop in the disk usage shown on the Data Models screen. Do I need to Rebuild and Update the acceleration? If so, in which order, and are there any performance or other risks involved in doing so? Thanks.
Hi all, I have an on-prem Exchange distribution list, let's say dl@mydomain.com. I want to know how many emails were sent to this DL in one year. Experts, please help me with the Splunk query to get this information.
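As a hedged starting point, assuming the Exchange message-tracking logs are already being indexed, a sketch might look like the following; the index, sourcetype, and recipient field names are assumptions that need to be adapted to whatever your Exchange add-on actually produces:

index=msexchange sourcetype="MSExchange:*:MessageTracking" recipient_address="*dl@mydomain.com*" earliest=-1y
| stats count AS emails_to_dl

If the message-tracking logs are not indexed yet, they would need to be onboarded first before any search can count them.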
We have had a problem with our Microsoft Azure add-on since July: the field appliedConditionalAccessPolicies: [ [ - ] ] is missing its data. We upgraded the add-on to the latest version but are still facing the same issue. Is anyone facing a similar problem? Please let us know if there is a fix.
Hi Splunkers, I need help translating this search query into Splunk configuration via props/transforms. For context, the "letter" field was extracted from a CSV. The "letter" field value is dynamic (it may contain fewer or more values), and each value is wrapped in an HTML tag, syntax: <p>"value"</p>. What would be best practice in this scenario: should I go with the search-time method or index time? An example sketch follows the sample below.

Sample query:

| makeresults
| eval letter = "<p>A</p><p>B</p><p>C</p><p>D</p>"
| eval letter = replace(letter,"<p>","")
| eval letter = replace(letter,"</p>","__")
| makemv delim="__" letter

Expected output:

letter
A
B
C
D
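A hedged sketch of the search-time route, assuming the letter field is already available to search-time transforms (for example as an indexed field from INDEXED_EXTRACTIONS) and that the sourcetype name below is a placeholder:

transforms.conf
[letter_mv]
SOURCE_KEY = letter
REGEX = <p>([^<]+)</p>
FORMAT = letter_value::$1
MV_ADD = true

props.conf
[your_sourcetype]
REPORT-letter_mv = letter_mv

This yields a multivalue letter_value field (A, B, C, D) without rewriting the raw data, which is generally preferable to an index-time change unless you specifically need the individual values as indexed terms.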
Hi Guru, how do we exclude 0% process usage from hostmetrics? We would like to capture only processes with more than 0% usage. I'd appreciate it if you could provide a sample.

hostmetrics:
  collection_interval: 10s
  scrapers:
    # System processes metrics, disabled by default
    process:
      (filter / exclude 0% process usage)
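A hedged sketch of one possible approach, assuming a recent OpenTelemetry Collector contrib/Splunk distribution that includes the OTTL-based filter processor, and assuming the process.cpu.utilization metric is enabled in the process scraper; both the processor syntax and the metric name are assumptions and may differ by collector version:

processors:
  filter/drop_idle_processes:
    error_mode: ignore
    metrics:
      datapoint:
        - 'metric.name == "process.cpu.utilization" and value_double == 0'

service:
  pipelines:
    metrics:
      # add alongside your existing processors
      processors: [filter/drop_idle_processes]

This drops only the zero-valued datapoints at collection time; filtering the 0% processes later at search time is another option if your collector build lacks this processor.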
Hi peeps, I need help extracting some fields.

Sample logs:

Aug 24 09:30:43 101.11.10.01 CEF:0|KasperskyLab|SecurityCenter|13.2.0.1511|GNRL_EV_ATTACK_DETECTED|Network attack detected|4|msg=User: NT AUTHORITY\\SYSTEM (System user)\r\nComponent: Network Threat Protection\r\nResult description: Blocked\r\nName: Scan.Generic.PortScan.TCP\r\nObject: TCP from 101.11.10.01 at 101.11.10.01:25\r\nObject type: Network packet\r\nObject name: TCP from 101.11.10.01 at 101.11.10.01\r\nAdditional: 101.11.10.01\r\nDatabase release date: 23/8/2022 12:26:00 PM rt=1661304218000 cs9=Workstation cs9Label=GroupName dhost=082HALIM141 dst=101.11.10.01 cs2=KES cs2Label=ProductName cs3=11.0.0.0 cs3Label=ProductVersion cs10=Network Threat Protection cs10Label=TaskName cs1=Scan.Generic.PortScan.TCP cs1Label=AttackName cs6=TCP cs6Label=AttackedProtocol cs4=2887053442 cs4Label=AttackerIPv4 cs7=25 cs7Label=AttackedPort cs8=2887125841 cs8Label=AttackedIP

Aug 24 09:30:43 101.11.10.01 CEF:0|KasperskyLab|SecurityCenter|13.2.0.1511|GNRL_EV_ATTACK_DETECTED|Network attack detected|4|msg=User: NT AUTHORITY\\SYSTEM (System user)\r\nComponent: Network Threat Protection\r\nResult description: Blocked\r\nName: Scan.Generic.PortScan.TCP\r\nObject: TCP from 101.11.10.01 at 101.11.10.01:42666\r\nObject type: Network packet\r\nObject name: TCP from 101.11.10.01 at 101.11.10.01:42666\r\nAdditional: 101.11.10.01\r\nDatabase release date: 23/8/2022 12:26:00 PM rt=1661304218000 cs9=Workstation cs9Label=GroupName dhost=082HALIM141 dst=101.11.10.01 cs2=KES cs2Label=ProductName cs3=11.0.0.0 cs3Label=ProductVersion cs10=Network Threat Protection cs10Label=TaskName cs1=Scan.Generic.PortScan.TCP cs1Label=AttackName cs6=TCP cs6Label=AttackedProtocol cs4=2887053442 cs4Label=AttackerIPv4 cs7=42666 cs7Label=AttackedPort cs8=2887125841 cs8Label=AttackedIP

I need help extracting the underlined value into a field named TCP, for example: TCP=101.11.10.01. Please help. Thanks.
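A hedged sketch of a search-time extraction, assuming the value wanted is the IP that follows "Object name: TCP from" in the msg portion (if a different occurrence is needed, the anchor text would change accordingly):

... your base search ...
| rex field=_raw "Object name: TCP from (?<TCP>\d{1,3}(?:\.\d{1,3}){3})"
| table _time dhost TCP

The same regex could be made permanent as an EXTRACT- entry in props.conf for the sourcetype once it matches the events correctly.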
Hi, I have two clustered environments and I want to copy knowledge objects (KOs) from one site to the other. They need to have pretty much the same alerts, dashboards, etc. Thanks
Currently seeing issues after performing a certificate renewal.

Errors seen in splunkd.log:

08-24-2022 00:58:03.942 +0000 ERROR SSLCommon - Can't read key file /opt/splunk/etc/auth/splunkweb/private.key errno=185073780 error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch.
08-24-2022 00:58:03.942 +0000 ERROR HTTPServer - SSL context could not be created - error in cert or password is wrong
08-24-2022 00:58:03.942 +0000 ERROR HTTPServer - SSL will not be enabled

The web.conf configuration was validated in $SPLUNK_HOME/var/run/splunk/merged/web.conf and $SPLUNK_HOME/etc/system/local/web.conf:

sslPassword = <HASHED_PASSWORD>
serverCert = $SPLUNK_HOME/etc/auth/splunkweb/server.pem
privKeyPath = $SPLUNK_HOME/etc/auth/splunkweb/private.key

I confirmed that the sslPassword is valid by decrypting it:

/opt/splunk/bin/splunk show-decrypted --value <HASHED_PASSWORD>
openssl rsa -in /opt/splunk/etc/auth/splunkweb/private.key -noout -text
(entered <decrypted_HASHED_PASSWORD>; the private key opens correctly)

The following commands were run to validate the integrity of the certificates; all values are the same:

openssl x509 -noout -modulus -in /opt/splunk/etc/auth/splunkweb/cert.pem | openssl md5
openssl x509 -noout -modulus -in /opt/splunk/etc/auth/server.pem | openssl md5
openssl rsa -noout -modulus -in /opt/splunk/etc/auth/splunkweb/private.key | openssl md5

The host has been rebooted recently and SELinux is disabled.
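One additional sanity check that may be worth running, since the modulus comparisons above used splunkweb/cert.pem and auth/server.pem while web.conf points at splunkweb/server.pem: compare exactly the certificate and key files that web.conf references. The paths below are taken from the web.conf shown above and are assumptions about the layout:

openssl x509 -noout -modulus -in /opt/splunk/etc/auth/splunkweb/server.pem | openssl md5
openssl rsa -noout -modulus -in /opt/splunk/etc/auth/splunkweb/private.key | openssl md5

The two digests should match; if server.pem contains a certificate chain, the server certificate must be the first PEM block in the file.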
Hello! I've deployed Splunk_TA_Stream to a SUSE Enterprise 12 host in order to capture queries from Informix DB 12.10FC14AEE, but I can't see any queries. I have all the other data, such as SSH, DNS queries, etc. Do I need another configuration in order to be able to read that data? Thanks!
I have a lookup file called ipaddress.csv. The column title in the file is ipaddress. I want to search my logs for all of these IP addresses. I know I need to use inputlookup to get the addresses from the file, but I can't figure out how to then feed them to a search. Thanks in advance.
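A hedged sketch of the usual subsearch pattern, assuming the events carry the address in a field called src_ip (the index, sourcetype, and src_ip names are placeholders; rename to whatever field your events actually use). The subsearch expands into a large OR of src_ip=... terms:

index=your_index sourcetype=your_sourcetype
    [| inputlookup ipaddress.csv
     | rename ipaddress AS src_ip
     | fields src_ip]
| stats count BY src_ip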
I have been looking around and I have seen some people having issues with certain dependencies. After dealing with issues caused by manually adding modules by dragging them into the aob_py3 folder, I have been trying to find information on the "official and proper" way to have Add-on Builder include and use additional libraries as needed. I feel like manually moving the needed libraries/modules into the aob_py3 folder can't be the best solution, but it might be the only one.

My odd workaround for my Splunk instance not working properly with modules/libraries added using the above method was to install them by doing the following:

cd /opt/splunk/bin
sudo su splunk
./splunk cmd python
# to install packages; in package_names you can add a comma and append multiple names
import pip; package_names=['grpcio']; pip.main(['install'] + package_names + ['--upgrade'])

In other words, going straight to the local Python environment that Splunk uses, acting as the splunk user, and installing the package directly there. The big problem with this solution is that it can't be packaged up and then imported using Add-on Builder.

I have tried using the same module shown above, grpcio, in a lot of ways, but it has some sort of issue with locating or running in general unless I do it as shown above. As a side note, many other modules import just fine; this one in particular seems to handle things differently compared to other modules.

I know that with a Docker container you can specify the dependencies to be installed and work with a nice little config to define these things, and I just wanted to reach out here, as the product page suggested, to see if there is a better way to resolve this. I thought that if I installed the dependencies the way shown above and then used Add-on Builder to create a new app, maybe it would pack that library into the aob_py3 folder so it could be packaged up, but it doesn't work that way.
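For what it's worth, here is a minimal sketch of the commonly described alternative of vendoring the dependency inside the app itself and extending sys.path, rather than installing into Splunk's Python. I can't confirm this is Add-on Builder's official mechanism, and the app name, lib folder, and script name below are placeholders:

# On a build machine with a matching OS and Python version, vendor the package into the app:
#   pip install --target /opt/splunk/etc/apps/my_addon/bin/lib grpcio

# my_addon/bin/my_input.py
import os
import sys

# Put the vendored libraries ahead of everything else on the import path
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib"))

import grpc  # now resolved from my_addon/bin/lib

One caveat: grpcio ships compiled extensions, so the vendored copy must be built for the same OS and Python version as the target Splunk instance, which may explain why it behaves differently from pure-Python modules.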
We are currently attempting to find out why a scheduled search cannot run. We were finally able to locate the search and the reason why. The search that cannot run is a default search that is part of a default dashboard within the License Monitor for Splunk application on Splunkbase. The search is failing because a macro is either misspelled or does not exist; in this case it appears that it does not exist.

The macro 'index_assignment_notable_management' does not exist.

I was wondering if this macro is perhaps located within another app, or if it is no longer contained within the app?

https://splunkbase.splunk.com/app/3521/
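As a quick way to check whether the macro exists anywhere on the search head, a hedged sketch using the REST endpoint for macros.conf (results are still subject to your user's permissions):

| rest /servicesNS/-/-/configs/conf-macros splunk_server=local count=0
| search title="index_assignment_notable_management*"
| table title eai:acl.app eai:acl.sharing definition

If this returns nothing for an admin user, the macro is genuinely absent rather than just hidden in another app.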
This is after a restart of a Windows VM that I installed the forwarder on, and I put the settings in outputs.conf.

This is my outputs.conf file; I tried to make it the same for Windows and Linux. Currently box 1 is a Linux VM and box 2 is a Windows VM. I have allowed traffic on 8089, 9997 and so on, and I can ping the Linux host and what I believe to be the IP of Splunk.

So, first question: what is that error telling me (what do I need to change)?

If my Linux ifconfig comes back as 10.1.1.2, but my nslookup of https://dinkdonk comes back as 10.1.10.20, which am I using as the forwarding IP address? When I do this on either Linux or Windows, that IP should be the same, right? See below:

./splunk add forward-server 10.10.10.10:9997
./splunk set deploy-poll 10.10.10.10:8089

Also, just making sure: in this case my Linux VM is my deployment server, search head, and indexer, right?
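For reference, a minimal outputs.conf sketch for pointing a forwarder at a single indexer; the 10.1.1.2 address is only a placeholder for whichever address the indexer actually listens on and is reachable from the forwarder on port 9997, not a statement about which of the two IPs above is correct:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.1.1.2:9997

The ./splunk add forward-server command writes an equivalent stanza for you, so the address you give it should be the same one you would put here.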
Hi all, When I visit the Apps page on my search head server and select "Upgrade to ..." on any of my applications that require an upgrade, I get the following 500 Internal Server Error:

/opt/splunk/var/log/splunk/web_service.log

2022-08-23 21:11:36,576 ERROR [63054287cc7fe2c82754d0] error:335 - Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 687, in _do_respond
    response.body = self.handler()
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/lib/encoding.py", line 219, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/htmlinjectiontoolfactory.py", line 75, in wrapper
    resp = handler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/routes.py", line 383, in default
    return route.target(self, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-98>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 40, in rundecs
    return fn(*a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-96>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 118, in check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-95>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 166, in validate_ip
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-94>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 245, in preform_sso_check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-93>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 284, in check_login
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-92>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 304, in handle_exceptions
    return fn(self, *a, **kw)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/appinstall.py", line 232, in start
    remote_app = self.getRemoteAppEntry(appid);
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/appinstall.py", line 106, in getRemoteAppEntry
    return en.getEntity('/apps/remote/entriesbyid', sbAppId)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/entity.py", line 277, in getEntity
    serverResponse, serverContent = rest.simpleRequest(uri, getargs=kwargs, sessionKey=sessionKey, raiseAllErrors=True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 655, in simpleRequest
    raise splunk.ResourceNotFound(uri)
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/services/apps/remote/entriesbyid/treeview-viz

This happens on any application I try to upgrade via the GUI - I can upgrade them fine when doing it manually or by uploading the tgz archive. I can use the GUI just fine on any other server (indexer/heavy forwarder). Does anyone know what could be causing this? Any help would be appreciated.
Hi All, I have a dashboard that initially loads with a hidden panel. A drilldown click in a table on a different panel shows the hidden panel. However, I want the panel to be hidden again each time the user clicks the Submit button at the top; the hidden panel should only be displayed when the user clicks an event for drilldown. Currently, the hidden panel stays visible with old data when the user fills the input fields with different values and presses Submit. I need your help resolving this in the dashboard's XML. Thank you
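A minimal Simple XML sketch of one common pattern, assuming the detail panel is driven by a token named show_detail (all names, indexes, and fields below are placeholders): the drilldown sets the token, and the summary panel's search unsets it whenever it re-runs, i.e. after Submit, so the detail panel disappears until the next drilldown.

<row>
  <panel>
    <table>
      <search>
        <query>index=main | stats count by host</query>
        <!-- hide the detail panel every time this search re-runs (e.g. after Submit) -->
        <progress>
          <unset token="show_detail"></unset>
        </progress>
      </search>
      <drilldown>
        <set token="selected_host">$row.host$</set>
        <set token="show_detail">true</set>
      </drilldown>
    </table>
  </panel>
</row>
<row>
  <panel depends="$show_detail$">
    <table>
      <search>
        <query>index=main host="$selected_host$" | table _time host message</query>
      </search>
    </table>
  </panel>
</row>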
Hi All, In a Splunk classic dashboard, I have a table which displays a sparkline based on event_count. I need to change the sparkline to a bar chart or column chart in each row, and I need your help with this. Thank you
How do I fix low disk space on an Enterprise indexer? Please comment back on how to fix it.
Hello, I have a report I am having issues with. It is for CPU usage on laptops. I have tried stats perc() and stats avg(), and I get a lot of false positives. For instance, if a laptop is only powered on for a couple of hours, there would be 8 data points, since the default is to pull CPU usage every 15 minutes, so 4 of the data points could show high CPU usage, but that is explained by the bootup, patching, and other scripts running. What we care about is consistent CPU usage, so we monitor the data points and, for every data point that goes over 70% CPU, add one to a count. Then, over a week, we only want to see machines that have more than 70 data points going over 70%. The part I am struggling with is that I also want the total count of data points, so we can compare the total data points to the high-CPU data points and get a percentage of high processor time.

This is the code I have, and it works at telling me the data points over 70%, but whenever I try to add an overall total I cannot get it to work:

index=wss_desktop_perfmon sourcetype="wks:Perf_Processor" %_Processor_Time > 69
| stats count as CPULoad avg(%_Processor_Time) as %_Processor_Time by host
| lookup local=true PrimaryUsers.csv host AS host OUTPUT host DeviceType FullName Location Address Model OSVer TotalPhysicalMemoryKB Email PrimaryUser Supervisor "Supervisor Email"
| search Location IN ("GA1*", "GA7*", "GA9*")
| where CPULoad > 70
| rename CPULoad as "High CPU DataPoint"

Host        High CPU DataPoint    %_Processor_Time
Computer1   97                    78.54106664

Now I would like to add in a total count of data points from %_Processor_Time.
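A hedged sketch of one way to get both counts in a single pass: drop the >69 filter from the base search so every data point is counted, then use an eval inside count() for the high-CPU points. Field, lookup, and threshold values are taken from the search above:

index=wss_desktop_perfmon sourcetype="wks:Perf_Processor"
| stats count AS TotalDataPoints
        count(eval('%_Processor_Time' > 69)) AS "High CPU DataPoint"
        avg(%_Processor_Time) AS %_Processor_Time
        by host
| eval "High CPU Percent" = round('High CPU DataPoint' / TotalDataPoints * 100, 1)
| lookup local=true PrimaryUsers.csv host AS host OUTPUT host DeviceType FullName Location Address Model OSVer TotalPhysicalMemoryKB Email PrimaryUser Supervisor "Supervisor Email"
| search Location IN ("GA1*", "GA7*", "GA9*")
| where 'High CPU DataPoint' > 70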
I am relatively new to a company that used Splunk Professional Services to spin up a Splunk Cloud environment before I was hired. The company IT has onboarded a lot of AWS, Azure, on-prem and network devices so far. I'm trying to verify that they are in fact sending logs into the Splunk index, so that I can eventually apply use cases and alerting on the logs as well as troubleshoot those hosts which aren't sending but are supposed to be. There isn't a Splunk resource in the company, so I am trying my best to figure it out as I go (classic).

The IT manager gave me a spreadsheet of hostnames and private IP addresses for all the devices which are forwarding logs. At first I thought I could run a search to just compare his list with logs received by hostname, but I couldn't figure that out. Here's what I did instead: over a 30-day search I ran | metadata type=hosts index=* and exported the results to a CSV. I took the 'hosts' column (a combination of hostnames and IP addresses) from the export and diffed it against the IT manager's list of hostnames/IP addresses; where an entry wasn't found, I presumed it had not sent logs during that time period. The inventory has about ~850 line items in total which are supposedly onboarded, and I saw logs from about ~250. Obviously I am second-guessing myself because of the delta.

When I spot-check some hostnames/IP addresses from IT's asset inventory spreadsheet in Splunk, some return no results, some return only DNS or FW traffic from that server (so it still needs onboarding to get server logs), and others return results where the 'host' field is a cloud appliance (like Meraki) and the hostname or IP matches other fields such as 'dvc_up', 'deviceName' or 'dvc'. This is really confusing the heck out of me and making me question whether there is a better way. So, is there? How do you normally audit and verify that your logs are still being received into your Splunk instance? Thanks so much for your help, and I'm looking forward to learning!
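One hedged way to do the comparison inside Splunk instead of exporting to CSV: upload the IT inventory as a lookup (assumed here to be called expected_hosts.csv with a column named host) and left-join it against what has actually reported in. All names and the 30-day window are assumptions:

| inputlookup expected_hosts.csv
| eval host=lower(host)
| join type=left host
    [| tstats count AS events latest(_time) AS last_seen WHERE index=* earliest=-30d BY host
     | eval host=lower(host)]
| fillnull value=0 events
| eval last_seen=if(events=0, "no events in last 30 days", strftime(last_seen, "%Y-%m-%d %H:%M:%S"))
| table host events last_seen
| sort events

Note that this only matches on the host field, so devices whose logs arrive via an intermediary (for example a Meraki cloud appliance) and only show up in fields like dvc or deviceName will still look missing here, which matches what the spot checks are showing.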
Hi All, What is the best way to integrate Samba AD logs for user activity with Splunk Cloud?