All Topics

Hi all, We have a message trace set up to continuously monitor a client's account; however, it recently stopped working out of nowhere. I've deleted and recreated the trace, which worked for a few days, but now doing so does not help at all; the only way of getting the data is with a one-off manual index. Currently I have the settings set to the following:

Interval: 3600
Index: Main
Input Mode: Continuously Monitor
Query window size: 60
Delay throttle: 10

Is there something in these settings that could be causing an issue?
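Two quick checks that often narrow this down (a sketch; the sourcetype name is an assumption based on the Microsoft Office 365 Reporting add-on, so adjust it to whatever your input actually writes):

index=main sourcetype=ms:o365:reporting:messagetrace
| stats max(_time) as last_event
| eval last_event = strftime(last_event, "%F %T")

index=_internal sourcetype=splunkd log_level=ERROR (component=ExecProcessor OR component=ModularInputs)

The first shows exactly when ingestion stopped; the second surfaces any errors the input itself is logging on the instance that runs it.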
Hi folks, I have a Windows deployment server on-prem. I have 20 Windows servers on which I installed the Universal Forwarder and pointed it at the deployment server over port 8089. Where on the deployment server can I confirm that all 20 Windows servers are reporting in?
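A few places to check (the REST field names are from memory, so treat them as assumptions): the Forwarder Management UI (Settings > Forwarder Management) shows each client's phone-home status, and the same data is available from the CLI and REST on the deployment server:

splunk list deploy-clients

| rest /services/deployment/server/clients
| table hostname ip lastPhoneHomeTime

If all 20 servers appear with a recent lastPhoneHomeTime, they are checking in over 8089 as expected.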
Is there a way to specify a schedule using a cron expression (or via some other means) that specifies something like 'every 2nd business day of the month'?  Thank you.
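Cron alone cannot express "business day", but a common workaround (a sketch that ignores public holidays) is to schedule the search on days 2-4 of the month, e.g. cron 0 6 2-4 * *, and have the search itself decide whether today is the second business day:

| makeresults
| eval dom = tonumber(strftime(now(), "%d")), wday = strftime(now(), "%a")
| where (dom=2 AND in(wday, "Tue", "Wed", "Thu", "Fri"))
     OR (dom=3 AND wday="Tue")
     OR (dom=4 AND in(wday, "Mon", "Tue"))

The three clauses cover the cases where the 1st of the month falls on a weekday, a Sunday, or a Friday/Saturday respectively; wiring this in as a guard (e.g. a subsearch that returns nothing on other days) makes the scheduled search a no-op except on the second business day.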
Morning guys. I need to export content I've inserted into a table. The data model file is a JSON-like file; it looks nice after my field extraction and the Splunk table looks awesome, but when I try to export it with the usual button below the table on the dashboard, it exports unformatted, and I need it nicely formatted. Do you have any idea how I can export it in a nice-looking table format, as a .csv? Thank you all in advance. [screenshot: the table in Splunk] In Excel it appears as column A only, with all fields on different lines.
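One possible cause, guessing since the search isn't shown: if the panel's search doesn't end in a transforming command with an explicit field list, the export can fall back to raw events. Ending the search like the following usually yields a clean, columnar CSV (the field names are placeholders):

your_base_search
| spath
| table fieldA fieldB fieldC

If the CSV itself is fine but Excel still shows a single column, it may be Excel's CSV delimiter/locale settings rather than the export.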
Please help with adding dedup to this SPL for detecting skipped searches, to remove duplicates. Thank you.

`dmc_set_index_internal` search_group=dmc_group_search_head search_group=* sourcetype=scheduler (status="completed" OR status="skipped" OR status="deferred")
| stats count(eval(status=="completed" OR status=="skipped")) AS total_exec, count(eval(status=="skipped")) AS skipped_exec by _time, host, app, savedsearch_name, user, savedsearch_id
| where skipped_exec > 0
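If the duplicates come from splitting by _time and savedsearch_id, one option (a sketch; pick the grouping that matches what you consider a duplicate) is to dedup on the logical key after the filter:

... | where skipped_exec > 0
| dedup host app savedsearch_name user

Alternatively, dropping _time and savedsearch_id from the by-clause collapses the duplicates inside stats itself.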
We're trying to run a search but are getting these errors while doing so:

3 errors occurred while the search was executing. Therefore, search results might be incomplete.
[indexer1] Error in 'IndexScopedSearch': The search failed. More than 1000000 events found at time 1624492800.
[indexer2] Error in 'IndexScopedSearch': The search failed. More than 1000000 events found at time 1624510800.
[indexer3] Events might not be returned in sub-second order due to search memory limits. See search.log for more information. Increase the value of the following limits.conf setting: [search]: max_rawsize_perchunk.

We've increased our max_rawsize_perchunk limit on the indexers but are still seeing these errors. We're running Splunk Enterprise 8.0.0. How would we make this error go away? TIA!
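For reference, the setting lives in limits.conf on each indexer (a sketch; the value is an example, not a recommendation) and requires a splunkd restart to take effect:

[search]
max_rawsize_perchunk = 200000000

If the errors persist after raising it everywhere and restarting, it's worth confirming with btool (splunk btool limits list --debug) that the new value actually won over other copies of limits.conf.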
Hi All, I have a CSV file with the data below that I'm trying to push to Splunk.

Example:
Thu JUN 24  15:27:52 +08 2021,name1,address1,Thu  AUG14  15:27:52  2021,Active
Thu JUN 24  15:27:52 +08 2021,name1,address1,Thu JUN 15  05:15:52  2021,Active

In props.conf I'm using the stanza below, with the fourth field as the timestamp, but Splunk is not able to ingest the data:

[test_app]
SHOULD_LINEMERGE = FALSE
FIELD_DELIMETER = ,
HEADER_FIELD_DELIMETER = ,
FIELD_NAMES = Time,names,address,creationtime,status
TIMESTAMP_FIELDS = creationtime
TZ = Asia/Singapore
TIME_FORMAT = %a %b %d %H:%M:%S %Y

But using the first field of the CSV as the time field, I can push the data to Splunk with the stanza below. What might be the cause? Can someone explain?

SHOULD_LINEMERGE = FALSE
FIELD_DELIMETER = ,
HEADER_FIELD_DELIMETER = ,
FIELD_NAMES = Time,names,address,creationtime,status
TIMESTAMP_FIELDS = Time
TZ = Asia/Singapore
TIME_FORMAT = %d%m%Y%H%M
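Two things stand out, as possibilities rather than certain fixes. First, the settings are spelled FIELD_DELIMITER and HEADER_FIELD_DELIMITER, and these structured-data settings (together with FIELD_NAMES and TIMESTAMP_FIELDS) only take effect when INDEXED_EXTRACTIONS is set. A corrected sketch:

[test_app]
INDEXED_EXTRACTIONS = csv
SHOULD_LINEMERGE = false
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_NAMES = Time,names,address,creationtime,status
TIMESTAMP_FIELDS = creationtime
TZ = Asia/Singapore
TIME_FORMAT = %a %b %d %H:%M:%S %Y

Second, the sample creationtime values are inconsistent ("Thu  AUG14  15:27:52  2021" has no space between month and day, plus doubled spaces), so %a %b %d %H:%M:%S %Y cannot match every row; that alone could make timestamp extraction from the fourth field fail while the cleaner first field works.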
Hello, I have a strange problem in ITSI after an upgrade. The configured services are not showing up properly in the Service Analyzer tile view; they are greyed out. When I open the service with a deep dive, I can see my KPIs with data and a Service Health Score. Does anybody know what's going on?
I would like to extract two groups, TestName and Model. In the second row, the TestName runs into the Model, so they cannot be extracted as two separate groups. Link to the regex: regex101: build, test, and debug regex
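Without the sample rows it's hard to be specific, but when two values run together, a non-greedy first group anchored by a known prefix on the second group usually separates them. A purely hypothetical illustration (the "Model" prefix is made up):

| rex field=_raw "(?<TestName>.+?)(?<Model>Model\S+)"

The ? after .+ makes TestName stop at the first position where Model\S+ can start.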
Hello, I have recently found a strange difference between the lookup and inputlookup commands.

| makeresults | eval uid="asdf" | lookup mydata uid

| makeresults | eval uid="asdf" | join uid [| inputlookup mydata]

The "mydata" lookup is a KV store collection with the following columns: uid, name, address, fields. I was expecting these two queries to return the same results, but no. It seems the column "fields" is an array, and it returns a lot of data when used with the inputlookup command, which is not the case with the first (lookup) query.

The lookup results are like this:
_key | uid  | name | address | fields
...  | asdf | john | yes     | (empty)

The inputlookup results are like this:
_key | uid  | name | address | fields.town | fields.country
...  | asdf | john | yes     | chicago     | usa

I didn't find any documentation about this. Your input is welcome. Thanks
Hello everyone, We already have data from NetApp in our Splunk: NetApp filer ---> rsyslog (forwarded directly to Splunk via TCP) ---> Splunk. Two weeks ago I installed the Splunk Add-on for NetApp Data ONTAP. Since then I have been trying to add data through this add-on, but without success. I have followed these docs and am stuck on Step 2: https://docs.splunk.com/Documentation/AddOns/released/NetApp/Setup I don't know what I should fill in for "Splunk Forwarder URI". It should be the forwarder IP, but how can I find out which port? And which further settings do I need to set? I hope you can help me. Best, nan2021
Hello, I have a volume with a filesystem mountpoint as VolumePath. The "Volume Detail: Instance" page in the Monitoring Console tells me that the volume usage is ~26,400 GB, but the "df" command on the operating system says the usage is ~21,416 GB. I recently enabled tsidx reduction. Any idea why Splunk reports about 5 TB more usage than the filesystem does?
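To see where Splunk's number comes from, comparing its bucket-level accounting against df can help. A minimal sketch (sums what Splunk believes is on disk, per indexer):

| dbinspect index=*
| stats sum(sizeOnDiskMB) as totalMB by splunk_server
| eval totalGB = round(totalMB / 1024, 1)

If this total is also ~5 TB above df, Splunk's internal size metadata is stale (tsidx reduction is a plausible trigger); if it matches df, the discrepancy is specific to the volume-usage accounting in the Monitoring Console.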
Hi, I installed the Splunk Add-on for Unix and Linux in Splunk Cloud. During the initial configuration of the add-on, it gives the error "There was an unexpected problem while saving the inputs. Please reload the page and try again." when trying to save the configuration. Looking at the HTTP request errors, I found that missing edit_monitor and edit_scripted capabilities seem to be the problem. Example error: [SPLUNKD] You (user=sc_admin) do not have permission to perform this operation (requires capability: edit_scripted). In the user and role permission settings, I didn't find these capabilities available to assign to the related role. I'd appreciate any help solving the issue, or any workarounds. Thanks, Janaka
Hi, I have data that looks like this: user, ip, (metric kv pairs). Sample results for the search:

user=user1,ip=10.10.10.10,key1=10,key2=30
user=user2,ip=10.10.10.10,key1=5,key3=30
user=user1,ip=10.10.10.12,key2=10,key3=30,key4=2,key5=14,key6=4
user=user1,ip=10.10.10.10,key5=22

How do I pull out the metrics (key1-key6) and aggregate by metric? Say I wanted a pie chart with the totals of all the keys for a given IP/user (where IP and username are dashboard input tokens).
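One way (a sketch; the base search and the token names $user_tok$ and $ip_tok$ are placeholders) is to sum each key with a wildcard and then transpose, so the metric name becomes a row field the pie chart can split on:

your_base_search user=$user_tok$ ip=$ip_tok$
| stats sum(key*) as key*
| transpose column_name=metric
| rename "row 1" as total

The pie chart then uses metric as the split field and total as the value, and new keyN fields are picked up automatically by the wildcard.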
How can I join two fields from different sourcetypes that don't share the same name? The content of the two fields is exactly the same, though. For more detail, here are the two queries I would like to "join":

index=dynatrace* sourcetype="dynatrace:entity" hostGroup.meId="HOST_GROUP-*"
| spath
| stats values(discoveredName) as HostName by hostGroup.name

In this one, I'm searching the entities to list all my host names by host group.

index=dynatrace_hp sourcetype=dynatrace:metrics timeseriesId="com.dynatrace.builtin:host.mem.used"
| stats avg(value) as avgCPU, values(unit) as Unit by hostName

In this one, I'm searching the metrics to list all cpu loads by host name. I need to do it like this, as I don't have access to the host group from the data in the metrics sourcetype. I've already tried some joined queries, but I can't find a working solution. Any idea?
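A common pattern (a sketch built from your own field names) is to rename one side so the keys match before combining:

index=dynatrace_hp sourcetype=dynatrace:metrics timeseriesId="com.dynatrace.builtin:host.mem.used"
| stats avg(value) as avgCPU, values(unit) as Unit by hostName
| join type=left hostName
    [ search index=dynatrace* sourcetype="dynatrace:entity" hostGroup.meId="HOST_GROUP-*"
      | spath
      | rename discoveredName as hostName
      | stats values(hostGroup.name) as hostGroup by hostName ]

Renaming discoveredName to hostName is the key step; once the names match, stats over an appended search would also work and avoids join's subsearch limits.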
Hello folks, I am trying to install the Deep Learning Toolkit in our dockerized Splunk environment. For the installation and setup, we need to connect to the Docker socket. Running Docker in Docker is highly unusual. Is there a way to achieve that without installing Docker inside the running Splunk container? Thank you! Cheers!
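The usual pattern here (a sketch, not DLTK-specific guidance; verify against the toolkit's docs for your version) is not Docker-in-Docker but mounting the host's Docker socket into the Splunk container, so a Docker client inside the container talks to the host's daemon:

docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  splunk/splunk:latest

(Your existing splunk/splunk run options would be kept; the -v mount is the only addition.) Anything inside the container that needs Docker then points at unix:///var/run/docker.sock instead of a nested daemon.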
Hello! I've created a dashboard with some panels. Now I need to import new data and create a new index, but the data format is the same as before, so I want to use the same dashboard and panels to display the same visualization. Is there any way for me to do so without rewriting each query and saving it to the dashboard again? Thanks a lot!
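One approach (a sketch; idx is a token name invented for the example) is to parameterize the index in every panel search once, so switching data sets becomes a dropdown selection instead of a rewrite:

index=$idx$ sourcetype=your_sourcetype
| your existing search pipeline

In the dashboard's Simple XML, add an input (e.g. a dropdown) that sets the idx token, with the old and new index names as its choices; every panel then follows the selection.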
Hi, While running a query I give a time range like 23-06-2021 01:00 to 23-06-2021 04:00 AM; the timings can change as required. I want to build a comparison: if I run the query below for 23rd June with the given timings, I should get the data for that window and also the data for 22nd June (the previous day) between 08:00 PM and 09:00 PM.

"LLT*" Status!=200 | stats count by qname

This gives me the comparison with the peak hour, which is 08:00 PM to 09:00 PM. Another example: if I give 15th June from 10:00 AM to 11:00 AM in the dashboard, I should get data for 15th June and also for 14th June 08:00 PM to 09:00 PM. The previous day is always one day earlier than the date given in the dashboard, and the previous-day timings are constant. Can you please help me write this query? Thanks, SG
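A sketch of the shape this usually takes (assuming the selected window falls on "today"; if the dashboard can select arbitrary past days, the previous-day window has to be derived from the time token rather than from now):

"LLT*" Status!=200
| stats count by qname
| eval period="selected window"
| append
    [ search "LLT*" Status!=200 earliest=-1d@d+20h latest=-1d@d+21h
      | stats count by qname
      | eval period="previous day 20:00-21:00" ]

Here -1d@d+20h snaps to yesterday's midnight and adds 20 hours, i.e. yesterday 08:00 PM.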
Hello Splunkers, Let's assume my dashboard contains 10 panels, all of which use the same base search. I need an export option (the downward arrow shown when hovering over a panel) for each individual panel. That downward arrow can export to CSV, JSON or XML, but what if my panel uses a visualization such as a pie chart, line graph or donut? How can I export these panels individually, with the same visual?
Hi, I just realized a problem that surfaced with the installation of Splunk v8.2.0. I have a number of alerts executing external scripts in ~splunk/bin/scripts, and this setup has worked fine for years. Yesterday I realized the scripts were being executed, but at the end of the chain the Perl scripts try to execute a system command, which simply fails with return code 134. Naturally, I could execute the scripts interactively as the splunk user without trouble; only executing them from splunkd would fail. After hours of head-banging I had to work around the system commands by writing the commands to a queue and handling the queue separately with an external system. Now it works, but I find it disturbing that something that has worked fine for years suddenly starts to fail without a clear reason. The timeline associates the problem with the installation of Splunk v8.2.0. I won't bother filing a bug report, because it would be impossible for me to show beyond reasonable doubt that Splunk 8.2.0 is actually broken; this is just one of those issues of mixing Python, bash, Perl and SELinux, to name a few well-known candidates for blame. It is always someone else's fault. Has anyone else experienced similar issues? It would be delightful to avoid fighting this problem again at some other point. Br, Petri