All Topics


I've been having issues getting "CPU utilization" to show up on the Windows infrastructure dashboard. I found that when I click through the Windows entities onto a single Windows machine itself, I see all pertinent data for CPU utilization, but for some reason I cannot get it to show on the dashboards as a graph. I have it set as the key indicator on the overview dashboard, where it shows nothing, whereas the other key values (memory, network, and disk utilization) show data.

Key info:
- 4 Windows hosts (all with the same issue: N/A for CPU utilization)
- I have adjusted the search job schedule
- The entity discovery search is enabled, and I edited savedsearches.conf to give it more time
- Set the correct index in the macros in SA-ITOA
- Checked the _meta field in the Windows stanza; entity_type::windows_host is all there
- Perfmon::CPU is all there

So it's odd that I am getting N/A for CPU utilization on the Windows entity overview page and the infrastructure overview dashboard. Any ideas would be greatly appreciated.
Help!
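A quick sanity check is to query the Perfmon CPU counters directly and confirm the raw metric is searchable per host. A minimal sketch, assuming the default Splunk Add-on for Windows field names; the index name is a placeholder for wherever the Perfmon data lands:

index=perfmon sourcetype="Perfmon:CPU" counter="% Processor Time" instance="_Total"
| timechart avg(Value) AS avg_cpu_pct by host

If this returns data for all four hosts, the gap is more likely in the entity/KPI configuration than in the data itself.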
I have been experiencing issues getting the Splunk Universal Forwarder agent installed on AIX 7.1 and 7.2 servers. After installing the Splunk UF agent and starting the "splunkd" daemon, it runs for a few seconds and then dies (see details below). Has anyone had this issue? And if so, can you provide insight or a resolution?

root@PA-CLMLD001:/: ps -ef | grep splunkd
    root  7733308 18350184   0 10:48:09  pts/0  0:00 grep splunkd
root@PA-CLMLD001:/: /usr/bin/startsrc -s splunkd
0513-059 The splunkd Subsystem has been started. Subsystem PID is 7405708.
root@PA-CLMLD001:/: ps -ef | grep splunkd
    root  7405712  2752706 103 10:51:31      -  0:01 splunkd --nodaemon -p 8089 _internal_exec_splunkd
    root 11403272 18350184   0 10:51:35  pts/0  0:00 grep splunkd
    root 22544524  7405712   0 10:51:33      -  0:00 [splunkd pid=7405712] splunkd --nodaemon -p 8089 _internal_exec_splunkd [process-runner]
root@PA-CLMLD001:/: ps -ef | grep splunkd
    root  7733300 18350184   0 10:51:46  pts/0  0:00 grep splunkd

Thanks, Mel
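When splunkd exits a few seconds after startup, the first places to look are its own log and the process resource limits, which are a frequent culprit on AIX. A minimal sketch, assuming the default /opt/splunkforwarder install path:

tail -100 /opt/splunkforwarder/var/log/splunk/splunkd.log
ulimit -a    # low file-descriptor or data-segment limits commonly kill splunkd at startup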
Hi, I searched a lot and found no answer. I have data with the above timestamp and I want to convert it into local time. extract="year, month, day, hour, minute, second, zone" with (\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(\S+)\s+ works OK when the time zone is given in the form "+0000", but not with "UTC". Is there something like "litzone" available? Thanks in advance, Volkmar
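If a custom datetime.xml turns out to be avoidable, the literal zone name can often be handled by strptime-style timestamp recognition in props.conf instead, since %Z matches timezone abbreviations such as UTC. A minimal sketch; the stanza name is a placeholder and the timestamp is assumed to sit at the start of the event:

[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z
MAX_TIMESTAMP_LOOKAHEAD = 30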
Hi Splunkers, I'm researching the best way to send Mulesoft logs and events. Here on the community I found "What is the best way to integrate Mulesoft with Splunk cloud?", which states, in a nutshell, to follow this approach. It is clear enough how to implement it; my doubt is not about the procedure but about another point. The link above shows, let's say, direct forwarding from Mulesoft to the Splunk indexers/environment. What if I plan to put a HF between Mulesoft and the indexers? I mean: do I follow the same procedure, simply creating the token on my HF and then, once data arrives from Mulesoft, forwarding it to the indexers in the usual way? Or are there some changes I have to make? Note: I assumed a HF as the intermediate host because of the required token generation; I assume I cannot generate one on a UF. Feel free to correct me if I'm wrong.
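For reference, the HF-side configuration for that pattern would look roughly like the following. A minimal sketch; the token, index, and indexer names are placeholders:

# inputs.conf on the HF
[http]
disabled = 0

[http://mulesoft]
token = <your-generated-token>
index = mulesoft

# outputs.conf on the HF
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997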
Hi, Splunkers, I have the following panel in my dashboard, and I need a different drilldown for each of the following 3 table columns:

a       b       c
1234    abcd    xyz

When I click 1234 (column a), I expect to use 1234 as input to open another panel in the same dashboard. When I click abcd or xyz (column b or c), I expect to use them as input to open a different dashboard accordingly. How do I code this condition in the drilldown section? Thanks in advance. Kevin
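In classic Simple XML this is typically handled with <condition field="..."> blocks inside <drilldown>. A minimal sketch, assuming Simple XML (not Dashboard Studio); the token, dashboard, and form-input names are placeholders:

<drilldown>
  <condition field="a">
    <set token="tok_a">$click.value2$</set>
  </condition>
  <condition field="b">
    <link target="_blank">/app/search/dashboard_b?form.input_b=$click.value2$</link>
  </condition>
  <condition field="c">
    <link target="_blank">/app/search/dashboard_c?form.input_c=$click.value2$</link>
  </condition>
</drilldown>

The panel in the same dashboard can then be shown via depends="$tok_a$"; $click.value2$ carries the value of the clicked cell.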
Hello, We have developed a dashboard to monitor the source of attacks. The dashboard works fine; however, referring to the image on the left, when I hover over the indicator, it displays the count. How can I modify the search to capture the count as displayed on the right? Below is my query:

index="qradar_offenses" | spath | iplocation src | geostats count by src

Thanks in advance
Been tasked with deploying a highly available and scalable Splunk setup in AWS. I've briefly looked at two methods so far: deploying clustered search heads and indexers on EC2 instances, and using the Splunk Operator for Kubernetes to achieve the same. My questions:

- Does anyone have experience with this, and which deployment method would you recommend?
- With your recommended approach, can components autoscale, or do they need to be scaled manually?
- What is the best way to get data into Splunk Enterprise?

Would really appreciate any advice you could offer! Thank you.
I've tried several times now, but I can't get Splunk Enterprise to install on my Windows 10 machine. I even tried an older version with no success.
Hi! I have a set of Python scripts running every night as part of an automation process. These scripts output data into the _internal index, which is restricted for a large portion of the workforce. However, some teams should be able to view the output from these scripts, as it would help them troubleshoot why hosts fail. I was hoping to solve the access issue by having a scheduled search run every night collecting the output of the Python scripts, and using that report as a base search in a new dashboard powered by Dashboard Studio. I would then use the chain search function to create the tables and reports that staff with fewer privileges could use to access the otherwise unavailable information. But my problem is that I cannot get the base search to work, no matter what I put under the key App: "Put the name of the app holding the saved search here". When I use the "Open in Search" function from the dashboard, I notice that whatever I put in the "app" context, Splunk does not care; it always resolves to "undefined". Here is an example of the code from Dashboard Studio ("ref" is the name of the report, "app" the app where the report exists):

{
    "type": "ds.savedSearch",
    "options": {
        "ref": "internal_base_search",
        "app": "search"
    },
    "name": "Saved Search Data Source From S&R"
}

So no matter what I do, the URL I get from the "Open in Search" function points to:

<hostname>:8000/en-US/app/undefined/search?s=Internal%20base%20search%20Clone

If I manually change it to the app that holds the report, it finds it just fine, for example:

<hostname>:8000/en-US/app/search/search?s=Internal%20base%20search%20Clone

Has anyone had a similar issue? This feels like a bug to me: the "app" key in the JSON-formatted source code does not resolve correctly.
Our Splunk system has never had usage above 6 GB on a 10 GB license. Two weeks ago, usage jumped to 30 GB, and on weekends it jumps to almost 100 GB. We are locked out of searches, of course. Nothing in the system has changed as far as I am aware. I can pull up a usage report for 30 days, and it shows that all this usage is being recorded in the default/main index. The histogram looks the same for each week since this issue started: about 30 GB per day, and over twice that on weekends. What could cause this? How can I detect what is sending to the default/main index when I can't run any searches? The deployment hasn't changed since 2019. There are about 30 servers with forwarders.
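One avenue: searches against the _internal index still work during a license violation, and license_usage.log records indexed bytes by host, source, and sourcetype. A minimal sketch, run on or against the license master; the field names are the stock ones (b = bytes, h = host, s = source, st = sourcetype):

index=_internal source=*license_usage.log* type=Usage idx=main
| stats sum(b) AS bytes by h, s, st
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB

The top few rows should point at whichever host and sourcetype started flooding the main index.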
I am working on an app using the Splunk Python SDK and trying to generate logs with the logging library. I have used all the logging levels (info, warning, error, debug), e.g. logging.error("Checking error"), but no logs are generated in any file. Can anyone please help me with this issue?
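One common cause: the root logger defaults to the WARNING level and has no file handler until one is configured, so logging calls can silently go nowhere. A minimal Python sketch, assuming the script runs under Splunk; the file name is illustrative:

import logging
import os

# Write under $SPLUNK_HOME/var/log/splunk so the file is picked up
# into the _internal index by Splunk's default monitor input.
log_file = os.path.join(
    os.environ.get("SPLUNK_HOME", "/opt/splunk"),
    "var", "log", "splunk", "my_app.log",
)

logging.basicConfig(
    filename=log_file,
    level=logging.DEBUG,  # without this, debug/info records are dropped
    format="%(asctime)s level=%(levelname)s %(message)s",
)

logging.error("Checking error")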
Hello Splunkers. I would like to ask for some advice, as we are planning to replace a lot of rsync scripts that we use to distribute apps to all of our deployment servers. We have an architecture of 5 different tenants that are pretty much completely isolated from each other. Because of that, we have one deployment server in each tenant. To centrally manage all these tenants, we have one "master" server where we keep all our Splunk configuration (apps, serverclasses, etc.) and use scripts based on rsync to push it out to the other deployment servers. I have the impression that using tools like Ansible or Puppet has become the "industry standard" for handling such big Splunk multi-tenant environments. I found this presentation from .conf19, held by Splunk themselves, that shows how to utilize Ansible to achieve this: FN2048.pdf (splunk.com). As I understand it, the alternative to using a 3rd-party tool (i.e. Ansible) would be a "master/slave" configuration of the deployment servers, having the master deployment server push apps to /opt/splunk/etc/deployment-apps/ on the other slave deployment servers with a config like this:

[serverClass:secondaryDeploymentServersDeploymentApps]
targetRepositoryLocation = $SPLUNK_HOME/etc/deployment-apps

(source: https://community.splunk.com/t5/Deployment-Architecture/How-to-set-up-Multiple-Deployment-Servers-Configuration/m-p/45392)

We want to get rid of all these scripts for syncing indexers, standalone search heads, search head clusters, and UFs, so we are trying to find the best way. My question is: are there any advantages or disadvantages to these two models? The "Splunk only" method doesn't seem to be nearly as popular as using Ansible. Thanks in advance for any advice.
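For comparison, the rsync push translates fairly directly into an Ansible task. A minimal sketch, assuming placeholder inventory group and paths, and that the control node holds the central app repository:

# push_deployment_apps.yml
- name: Push apps to tenant deployment servers
  hosts: deployment_servers
  tasks:
    - name: Sync apps from the central repository
      ansible.posix.synchronize:
        src: /opt/splunk_master/deployment-apps/
        dest: /opt/splunk/etc/deployment-apps/
      notify: Reload deploy-server
  handlers:
    - name: Reload deploy-server
      ansible.builtin.command: /opt/splunk/bin/splunk reload deploy-server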
My requirement is to use the results of a subsearch together with the results of the main search, but the sourcetype/source differs between the main search and the subsearch, and I'm not getting the expected results when using the format command or $field_name$. inputlookup host.csv consists of the list of hosts to be monitored.

Main search:

index=abc source=cpu sourcetype=cpu CPU=all [| inputlookup host.csv ]
| eval host=mvindex(split(host,"."),0)
| stats avg(pctIdle) AS CPU_Idle by host
| eval CPU_Idle=round(CPU_Idle,0)
| eval warning=15, critical=10
| where CPU_Idle<=warning
| sort CPU_Idle

Subsearch:

[search index=abc source=top | dedup USER | return $USER]

There is a field host that is common to both, but the events from index=abc source=cpu sourcetype=cpu do not contain a USER field; the USER field is only there when source=top, not when source=cpu.
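Since USER exists only in the source=top events, one option is to attach it after the stats via a join on the shared host field instead of filtering with a subsearch. A minimal sketch under that assumption, keeping the original thresholds:

index=abc source=cpu sourcetype=cpu CPU=all [| inputlookup host.csv | fields host ]
| eval host=mvindex(split(host,"."),0)
| stats avg(pctIdle) AS CPU_Idle by host
| eval CPU_Idle=round(CPU_Idle,0)
| eval warning=15, critical=10
| where CPU_Idle<=warning
| join type=left host
    [ search index=abc source=top
      | eval host=mvindex(split(host,"."),0)
      | stats values(USER) AS USER by host ]
| sort CPU_Idle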
I want to find all spans of a particular service whose duration is greater than a particular amount. The only duration control I can find limits the search to traces whose entire duration is in the specified range, and the duration of the span for the service I'm interested in is only a part of that. Is there a way to do what I want?
We have created an experiment in MLTK and published a model for it. Is there a way other viewers can see the experiment? Everyone seems to be able to see only their own experiments when navigating to the Experiments tab. I would have expected to see a Permissions option in the Manage drop-down menu.
Hi all, I'm very new to Splunk, but have had some success using Dashboard Studio to display storage aggregate capacity. I have a SizeUsed field which gives me the % full of the aggregate at various points in time. I have set the queryParameters to earliest="-365d" and latest="now". This is the search I am using to display the current % full in a SingleValue chart:

index="lgt_netapp_prod" sourcetype="netapp:aggregate:csv" Name="type_ctry_2000_h01_n01_fsas_02"
| timechart last(SizeUsed) span=3d

I also have an Area chart on the same dashboard showing the growth mapped out over 12 months. I would like to calculate the number of days until the aggregate is full, using the daily growth rate of the aggregate over a 12-month period. The logic could be something like:

dailyGrowth = (last(SizeUsed) - first(SizeUsed)) / 365
capacityRemaining = 100 - last(SizeUsed)
daysTillFull = capacityRemaining / dailyGrowth

Unfortunately, I haven't been able to figure out the syntax that would allow me to use the values this way and then display the result in a chart. Could someone point me in the right direction? It would be a real feather in my cap if I could make this work for my employers. Cheers.
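The stats functions earliest() and latest() return the values from the chronologically first and last events, which maps directly onto that logic. A minimal sketch, reusing the same search and assuming SizeUsed is a plain percentage and the time range covers the last 365 days:

index="lgt_netapp_prod" sourcetype="netapp:aggregate:csv" Name="type_ctry_2000_h01_n01_fsas_02"
| stats earliest(SizeUsed) AS firstUsed latest(SizeUsed) AS lastUsed
| eval dailyGrowth=(lastUsed-firstUsed)/365
| eval capacityRemaining=100-lastUsed
| eval daysTillFull=round(capacityRemaining/dailyGrowth, 0)
| table daysTillFull

A SingleValue chart pointed at daysTillFull then displays the result.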
Hi all. I am currently experiencing an issue where simple string searches return no events, while two weeks ago they did. The time frame doesn't matter; I tried "All time" and still get zero events. So I want to check whether an index has been disabled or is not working properly. Is there a search query I can use to find these indexes?
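Two quick checks; a minimal sketch, assuming permission to run the rest and tstats commands:

| rest /services/data/indexes
| table title disabled currentDBSizeMB totalEventCount

| tstats count where index=* by index

The first lists every index with its disabled flag and size; the second shows which indexes have actually received events within the selected time range.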
I have a pie chart displaying the top 10 IP addresses for the past 60 minutes, and I'm trying to figure out how to click that slice of the pie chart and open a new window with relevant information about that specific IP address, instead of all the IP addresses in the pie chart.
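In classic Simple XML a panel-level <drilldown> with a <link> does this; $click.value$ carries the clicked slice's label. A minimal sketch, assuming a placeholder target dashboard named ip_details with a form.ip input:

<drilldown>
  <link target="_blank">/app/search/ip_details?form.ip=$click.value$&amp;form.time.earliest=-60m&amp;form.time.latest=now</link>
</drilldown>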
We are using Splunk Enterprise. We have created Deployment Server server classes per department of the users whose logs we collect, and we add each client to the matching server class. When I search from the search bar, logs from all clients are returned, but I would like to search only the logs of clients belonging to a specific Deployment Server server class. Is that kind of filtering possible? Thank you in advance.
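Server-class membership is not visible at search time, so a common workaround is to maintain a lookup mapping hosts to server classes and use it as a subsearch filter. A minimal sketch, assuming a hand-maintained lookup named serverclass_hosts.csv with host and serverclass columns:

index=* [ | inputlookup serverclass_hosts.csv where serverclass="dept_a" | fields host ]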