All Topics


I have been looking around and have seen people having issues with certain dependencies. After dealing with problems caused by manually adding modules by dragging them into the aob_py3 folder, I have been trying to find out what the "official and proper" way is to have Add-on Builder include and use additional libraries as needed. I feel like manually moving the needed libs/modules into the aob_py3 folder can't be the best solution, but it might be the only one.

My workaround for the Splunk instance not properly working with modules added that way was to install them directly:

cd /opt/splunk/bin
sudo su splunk
./splunk cmd python
# to install packages; in package_names you can add a comma and append multiple names
import pip; package_names=['grpcio'] ; pip.main(['install'] + package_names + ['--upgrade'])

In other words, I go straight into the local Python environment that Splunk uses, act as the splunk user, and install the package there. The big problem with this solution is that it can't be packaged up and then imported using Add-on Builder. I have tried using the module shown above, grpcio, in a lot of ways, but it has some sort of issue with locating or running in general unless I install it as shown above. As a side note, many other modules import just fine; this one in particular seems to handle things differently compared to other modules.

I know that with a Docker container you can specify the dependencies to be installed and define these things in a nice little config, so I just wanted to reach out here (as the product page specified to do so) and see if there is any context I can be given to find a better way to resolve this. I thought that if I installed the dependencies the way I show above and then used Add-on Builder to create a new app, it might pack that lib into the aob_py3 folder to then allow for packaging up, but it doesn't work that way.
We are currently attempting to find out why a scheduled search cannot run. We were finally able to locate the search and the reason why. The search that cannot run is a default search that is part of a default dashboard within the License Monitor for Splunk application, which is available on Splunkbase. The search is failing because a macro is either misspelled or does not exist; in this case it appears that it does not exist.

The macro 'index_assignment_notable_management' does not exist.

I was wondering if this macro is perhaps located within another app, or if it is no longer contained within the app? https://splunkbase.splunk.com/app/3521/
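For anyone checking the same thing, a hedged way to look for the macro across all apps is a REST search against the macros configuration (this assumes you are allowed to run | rest; the macro name is taken from the post):

| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| search title="index_assignment_notable_management"
| table title definition eai:acl.app eai:acl.sharing

If this returns nothing, the macro is not defined in any installed app, which would suggest the bundled search or dashboard references a macro that the app no longer ships.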
This is after a restart of a Windows VM that I installed the forwarder on, after putting the connection info in outputs.conf. This is my outputs.conf file; I tried to make it the same for Windows and Linux. Currently box 1 is the Linux VM and box 2 is the Windows VM. I have allowed traffic on 8089, 9997, and so on, and I can ping the Linux host and what I believe to be the IP of Splunk.

So the first question is: what is that error telling me (what do I need to change)?

If my Linux ifconfig comes back as 10.1.1.2 but my nslookup of httpS://dinkdonk comes back as 10.1.10.20, which one am I using as the forwarding IP address? When I do this on either Linux or Windows, that IP should be the same, right? See below:

./splunk add forward-server 10.10.10.10:9997
./splunk set deploy-poll 10.10.10.10:8089

Also, just making sure: in this case my Linux VM is my DS (deployment server), search head, and indexer, right?
Hi all, When I visit the Apps page on my search head server and select "Upgrade to ..." on any of my applications that require an upgrade, I get the following 500 Internal Server Error:

/opt/splunk/var/log/splunk/web_service.log

2022-08-23 21:11:36,576 ERROR [63054287cc7fe2c82754d0] error:335 - Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 687, in _do_respond
    response.body = self.handler()
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/lib/encoding.py", line 219, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/htmlinjectiontoolfactory.py", line 75, in wrapper
    resp = handler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/routes.py", line 383, in default
    return route.target(self, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-98>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 40, in rundecs
    return fn(*a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-96>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 118, in check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-95>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 166, in validate_ip
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-94>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 245, in preform_sso_check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-93>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 284, in check_login
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-92>", line 2, in start
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 304, in handle_exceptions
    return fn(self, *a, **kw)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/appinstall.py", line 232, in start
    remote_app = self.getRemoteAppEntry(appid);
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/appinstall.py", line 106, in getRemoteAppEntry
    return en.getEntity('/apps/remote/entriesbyid', sbAppId)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/entity.py", line 277, in getEntity
    serverResponse, serverContent = rest.simpleRequest(uri, getargs=kwargs, sessionKey=sessionKey, raiseAllErrors=True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 655, in simpleRequest
    raise splunk.ResourceNotFound(uri)
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/services/apps/remote/entriesbyid/treeview-viz

This happens on any application I try to upgrade via the GUI - I can upgrade them OK when doing it manually or by uploading the tgz archive.
I can use the GUI just fine on any other server (Index/Heavy Forwarder). Does anyone know what could be causing this? Any help would be appreciated.
Hi All, I have a dashboard that initially loads with a hidden panel. A drilldown click in a table on a different panel shows the hidden panel. However, I want the panel to be hidden again each time the user clicks the submit button at the top, and the hidden panel should only be displayed when the user clicks an event for drilldown. Currently, the hidden panel stays visible with old data when the user fills the input fields with different values and presses the submit button. I need your help to resolve the issue by handling it in the dashboard's XML. Thank you
Hi All, In a Splunk dashboard (classic dashboard), I have a table which displays a sparkline based on event_count. I need to change the sparkline to a bar chart or column chart in each row. I need your help with the same. Thank you
How do I fix low disk space on an Enterprise indexer? Please comment back on how to fix it.
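As a starting point (not a fix by itself), one hedged way to see which indexes are consuming the most disk on an indexer is dbinspect; retention or volume settings would then be adjusted for the largest offenders:

| dbinspect index=*
| stats sum(sizeOnDiskMB) as sizeOnDiskMB by index
| sort - sizeOnDiskMB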
Hello, I have a report I am having issues with. It is for CPU usage on laptops. I have tried stats perc() and stats avg(), but I get a lot of false positives. For instance, if a laptop gets powered on for a couple of hours, there would be 8 data points, since the default is to pull CPU usage every 15 minutes. So 4 of the data points could be high CPU usage, but that is explained by the bootup, patching, and other scripts running. What we care about is consistent CPU usage. So we are monitoring the data points, and for every data point that goes over 70% CPU we add one to the count. Then, over a week, we only want to see, per machine, when it has more than 70 data points going over 70%.

The challenge I am having is that I also want to get the total count of data points, so we can take the total data points, compare them to the high CPU data points, and get a percentage of high processor time.

This is the search I have, and it works at telling me the data points over 70%, but whenever I try to add an overall total I cannot get it to work:

index=wss_desktop_perfmon sourcetype="wks:Perf_Processor" %_Processor_Time > 69
| stats count as CPULoad avg(%_Processor_Time) as %_Processor_Time by host
| lookup local=true PrimaryUsers.csv host AS host OUTPUT host DeviceType FullName Location Address Model OSVer TotalPhysicalMemoryKB Email PrimaryUser Supervisor "Supervisor Email"
| search Location IN ("GA1*", "GA7*", "GA9*")
| where CPULoad > 70
| rename CPULoad as "High CPU DataPoint"

Host         High CPU DataPoint    %_Processor_Time
Computer1    97                    78.54106664

Now I would like to add in a total count of data points from %_Processor_Time.
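A possible sketch that keeps both the total and the high-CPU counts, assuming the > 69 filter is removed from the base search so that every data point is counted (field names, lookup, and thresholds are taken from the post):

index=wss_desktop_perfmon sourcetype="wks:Perf_Processor"
| stats count as TotalDataPoints count(eval('%_Processor_Time'>70)) as "High CPU DataPoint" avg(%_Processor_Time) as %_Processor_Time by host
| eval "High CPU Percent"=round('High CPU DataPoint'/TotalDataPoints*100, 2)
| lookup local=true PrimaryUsers.csv host AS host OUTPUT host DeviceType FullName Location Address Model OSVer TotalPhysicalMemoryKB Email PrimaryUser Supervisor "Supervisor Email"
| search Location IN ("GA1*", "GA7*", "GA9*")
| where 'High CPU DataPoint' > 70

The count(eval(...)) form only counts events where the condition is true, so widening the base search is what makes TotalDataPoints meaningful.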
I am relatively new to a company that used Splunk Professional Services to spin up a Splunk Cloud environment before I was hired. The company IT has onboarded a lot of AWS, Azure, on-prem, and network devices so far. I'm trying to verify that they are in fact sending logs into the Splunk index so that I can eventually apply use cases and alerting on the logs, as well as troubleshoot those hosts which aren't sending but are supposed to be. There isn't a Splunk resource in the company, so I am trying my best to figure it out as I go. (classic)

The IT manager gave me a spreadsheet of hostnames and private IP addresses for all the devices which are forwarding logs. At first I thought I could run a search to just compare his list with logs received by hostname, but I couldn't figure that out. Here's what I did instead. Over a 30-day range I ran | metadata type=hosts index=* and exported the results to a CSV. I took the 'hosts' column (which was a combination of hostnames and IP addresses) from the export and did a diff against the IT manager's list of hostnames/IP addresses, and where a host wasn't found, I presumed it had not sent logs during that time period.

The inventory has about ~850 line items in total which are supposedly onboarded, and I saw logs from about ~250. Obviously I am second-guessing myself because of the delta. When I spot-check some hostnames/IP addresses from the IT asset inventory spreadsheet in Splunk, there are some that return no results, some where it is just DNS or FW traffic from that server (so it needs onboarding to get server logs), but others where I get results where the 'host' field is a cloud appliance (like Meraki) and the hostname or IP matches other fields such as 'dvc_up', 'deviceName' or 'dvc'. This is really confusing the heck out of me and making me question if there is a better way. So, is there? How do you normally audit and verify that your logs are still being received into your Splunk instance? Thanks so much for your help and looking forward to learning!
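One hedged way to do the comparison inside Splunk, assuming the IT spreadsheet is uploaded as a lookup (hypothetical name expected_hosts.csv with a host column holding the hostnames/IPs):

| metadata type=hosts index=*
| eval host=lower(host), seen="yes"
| append [| inputlookup expected_hosts.csv | eval host=lower(host), expected="yes"]
| stats values(seen) as seen values(expected) as expected by host
| eval status=case(expected="yes" AND isnull(seen), "expected but not reporting", expected="yes" AND seen="yes", "reporting", isnull(expected), "reporting but not in inventory")
| where status="expected but not reporting"

The caveat from the post still applies: hosts that log through an intermediary (for example a Meraki cloud appliance) may show up under a different host value, so a miss here does not always mean no logs are arriving.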
Hi All, What is the best way to integrate Samba AD logs for user activity with Splunk Cloud?  
I use the Splunk Phantom Remote Search app to connect Phantom to Splunk Enterprise. It worked fine until I migrated the Splunk indexer cluster to new servers; after that, my Phantom stopped forwarding logs to Splunk Enterprise. I tried running "Test Connection" on the search settings page in Phantom and the connection test works. I then tried to reindex on the search settings page, and after reindexing I found that the lost logs were sent into Splunk, but Phantom does not automatically forward logs to Splunk Enterprise afterwards. It still does not forward logs until I reindex again.
Hi All, How do I configure the "Metrics add on for Infrastructure" application in Splunk Enterprise as well as on a Splunk forwarder server? I would appreciate it if anyone can help with this.
Hi everyone, I've found similar questions in the history, with answers like "Search modes are just for the UI". But does it really not matter? If I create and tune a search in Verbose mode, how exactly will Splunk run the query? In what kind of "search mode"? I can't find that answer anywhere, even on the docs page ( https://docs.splunk.com/Documentation/Splunk/9.0.1/Search/Changethesearchmode ). Thanks a lot for some details.
I'm an end user! It appears to be just my user account; we don't seem to be able to find the answer. When I do any search (such as index="med") I get "Error in 'litsearch' command: Unable to parse the search: unbalanced parentheses."

When I go through the logs, I was surprised to see that such a simple search resulted in:

litsearch (index="med" index=nessus ((source="SI - EZproxy" orig_sourcetype="nessus:scan") OR sourcetype="nessus:scan") | lookup Device_Details nt_host as host-fqdn output bunit | search bunit="Medicine") | litsearch (index="med" index=nessus sourcetype=nessus:scan | lookup Device_Details nt_host as host-fqdn output bunit | search bunit="Medicine") | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1660905790.000000 lt=1660906690.000000 remove=true max_count=1000 max_prefetch=100

While the parentheses balance overall, I read somewhere that they have to balance within each pipe (|), which they don't. We do indeed have a nessus index, and several months ago someone started work on getting a Nessus reporting dashboard in Splunk to work (still ongoing). However, I am not sure why a simple search on index="med" would reference "nessus". Does the litsearch command look wrong? Where is it picking up the configuration to produce such a command, and can it be fixed? I have tried to create a table view of "med" and I get no entries rather than an error. I did that because it would be good to see the index, to know it's not a permission error.
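Since the extra index=nessus and lookup terms are injected before the user's own search runs, one hedged place to check is the search filter attached to that user's roles; this is only one possible source (an event type or automatic lookup scoped to the role could behave similarly), and it assumes an admin can run | rest:

| rest /services/authorization/roles splunk_server=local
| table title srchFilter srchIndexesAllowed srchIndexesDefault

If a role's srchFilter contains pipes or lookup/search fragments like the ones shown above, that would explain why a simple index="med" search expands into an unparsable litsearch string for this account only.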
Field name = pluginText

<plugin_output>Information about this scan :
Nessus version : 10.3.0
Nessus build : 20080
Plugin feed version : 202208222232
Scanner edition used : Nessus
Scanner OS : LINUX
Scanner distribution : es7-x86-64
Scan type : Normal
Scan name : Host_Discovery & OS_Identification
Scan policy used : 93e1da98-656c-5cd5-933b-ce6665fc0486-1939724/Host_Discovery_Scan_03292022
Scanner IP : 10.102.10.1
Port scanner(s) : nessus_syn_scanner
Port range : sc-default
Ping RTT : 11.921 ms
Thorough tests : no
Experimental tests : no
Plugin debugging enabled : no
Paranoia level : 1
Report verbosity : 1
Safe checks : yes
Optimize the test : yes
Credentialed checks : no
Patch management checks : None
Display superseded patches : yes (supersedence plugin launched)
CGI scanning : disabled
Web application tests : disabled
Max hosts : 30
Max checks : 5
Recv timeout : 5
Backports : None
Allow post-scan editing : Yes
Scan Start Date : 2021/8/10 1:55 UTC
can duration : 63 sec
</plugin_output>
I am in the process of mapping the use cases we have within Splunk / Enterprise Security to MITRE, and am also trying to organize them a bit. I'm using Splunk Security Essentials 3.6 and have a question concerning Any Splunk Logs. On the Content Introspection screen, some of my use cases are organized into different categories such as AWS, Application Load Balance, Authentication, Anti-virus, etc. However, a large percentage of my content just appears under the Any Splunk Logs heading - how can I change this? I even went back to the Data Inventory screen and manually assigned some of the indexes and sourcetypes to other categories, but nothing has changed. Help!
I can't figure out the correct syntax for the second eval statement, or what else I should use instead of eval. I know the second eval statement's syntax is incorrect; I am just placing it here so you can understand what I am trying to accomplish.

| eval FieldA=if(like(computername, "ABC%"), "Yes", "No")
| eval FieldB = if FieldA="No", then FieldB = FieldC, else FieldB = FieldA

Thank you!
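For reference, a minimal sketch of the intended logic using valid if() syntax (field names taken from the post):

| eval FieldA=if(like(computername, "ABC%"), "Yes", "No")
| eval FieldB=if(FieldA="No", FieldC, FieldA)

In eval, if() takes a condition, a value for true, and a value for false, so the assignment target stays on the left of the equals sign.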
How can I display the subsearch_scheduler?

index=_internal [ inputlookup splunk-servers | search splunk-component="Search Head" | fields host] source=/opt/ovz/splunk/var/log/splunk/scheduler.log [search index=_internal [ inputlookup splunk-servers | search splunk-component="Search Head" | fields host] log_level=ERROR component=SearchMessages sid=subsearch_scheduler* | table sid | dedup sid]
| stats count values(savedsearch_name) dc(savedsearch_name) by user
| sort - count
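If the goal is to see which subsearch_scheduler sids contributed to each user's count, one hedged tweak is to add the sid values to the stats output; the rest of the search is unchanged from the post, and this assumes the matched scheduler.log events carry a sid field:

index=_internal [ inputlookup splunk-servers | search splunk-component="Search Head" | fields host] source=/opt/ovz/splunk/var/log/splunk/scheduler.log [search index=_internal [ inputlookup splunk-servers | search splunk-component="Search Head" | fields host] log_level=ERROR component=SearchMessages sid=subsearch_scheduler* | table sid | dedup sid]
| stats count values(savedsearch_name) as savedsearch_names dc(savedsearch_name) as distinct_searches values(sid) as sids by user
| sort - count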
I have records like this:

_time  id  status
1      x   yes
1      x   no
2      x   yes
1      x   unknow

I want to return a record based on the status value: if any record has status yes, return the latest row that has yes; if there is no yes value, I want the row with no; if there is neither yes nor no, return the unknow row.
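A possible sketch, assuming the status values are literally "yes", "no", and "unknow" as shown, and that one row should be returned per id:

| eval status_rank=case(status="yes", 1, status="no", 2, true(), 3)
| sort 0 id status_rank -_time
| dedup id
| fields - status_rank

The sort orders each id's rows by status priority and then newest first, and dedup keeps the first row per id, so a yes row wins, then no, then unknow.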
Disclaimer - Fairly new to Splunk. I'm stuck on building a table for a dashboard. I would like to list a table of computer names with columns displaying the last 5-minute average values for CPU% / Mem% / DiskTransfers / etc. The search is:

index=azure sourcetype="mscs:azure:eventhub:vmmetrics" body.Computer=* body.ObjectName="Processor"
| stats first(body.CounterValue) by body.Computer

That gives me the last Processor value for each computer. (I can't do a 5-minute average - that can be a bonus-point answer!) How would I add the same search into the table but replace the body.ObjectName field value with body.ObjectName="Memory" and then body.ObjectName="DiskTransfers", and then combine that into one table? Thanks for helping
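A hedged sketch that combines the object names into one table and averages over the last 5 minutes (earliest=-5m is an assumption about how the window should be set; the "DiskTransfers" value and field names are taken from the post and may need adjusting to the actual counter names):

index=azure sourcetype="mscs:azure:eventhub:vmmetrics" earliest=-5m body.Computer=* body.ObjectName IN ("Processor", "Memory", "DiskTransfers")
| chart avg(body.CounterValue) over body.Computer by body.ObjectName

The chart command produces one row per computer with one column per ObjectName value. If the dashboard's time picker should control the window instead, drop earliest=-5m and set the panel's time range to the last 5 minutes.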