All Topics



My question is simple: which characters are allowed in the values of the metadata fields source and sourcetype? I could not find any documentation on this.
We have a ton of indexes and need to better understand which ones have stopped receiving events so that we can report and alert on them. We have a Splunk Enterprise v7.3.3 distributed environment with multiple (non-clustered) indexers and non-pooled search heads configured in standalone mode. Our DSV, SH, and ES are each individual hosts, and our ES is configured as a secondary SH. We manage index changes via CLI edits of indexes.conf, a deployment app, and redeployment of server classes. We currently use the search below in a dashboard panel, which generates a list of all "0-count" indexes that haven't received events in over 24 hours, but as a static list, it takes a lot of additional work to get a holistic view of what's changed and when. I'd prefer query logic over a new app, as we're already hoping to pare down some of (our own) 'bloat.'

## generates a list of all "0-count" indexes that haven't received events in over 24 hours...
| tstats count where (index=* earliest=-24h latest=now()) by index
| append [| inputlookup index_list.csv | eval count=0]
| stats max(count) as count by index
| where count=0

Thanks in advance!
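An editorial aside: the append/stats-max pattern in that search can be sketched outside SPL to show what it computes. A minimal Python sketch, where `observed` and `all_indexes` are made-up stand-ins for the tstats output and index_list.csv:

```python
# Sketch of the SPL pattern: append a 0-count row for every known index,
# then keep max(count) per index and report the ones still at 0.
observed = {"web": 1523, "firewall": 88}           # counts from tstats (last 24h)
all_indexes = ["web", "firewall", "dns", "proxy"]  # master list from index_list.csv

# "append [| inputlookup ... | eval count=0]" + "stats max(count) as count by index"
merged = {idx: max(observed.get(idx, 0), 0) for idx in all_indexes}

# "where count=0" -> indexes that received no events in the window
silent = sorted(idx for idx, count in merged.items() if count == 0)
print(silent)  # -> ['dns', 'proxy']
```

The max() step is what lets an index that did receive events win over its appended zero row.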
The software support policy for Splunk Enterprise is now two years. My company has a policy to wait a few releases before upgrading any software to make sure that new features are stable. But then we... See more...
The software support policy for Splunk Enterprise is now two years. My company has a policy to wait a few releases before upgrading any software to make sure that new features are stable. But then we only have a year before that version moves out of support. How do we get in the sweet spot of Splunk Enterprise updates?
Why does Splunk display empty fields in the table even though there are values there?
Hello Splunkers, I need your assistance with a line break at ],[ for the logs below:

{"time":1581014469,"states":[["4b1803","SWR55X ","Switzerland",1581014469,1581014469,8.7818,46.8227,6880.86,false,206.91,354.01,-7.8,null,7063.74,"1000",false,0],["3cf0a4","IFA509 ","Germany",1581014469,1581014469,7.9657,46.878,8534.4,false,143.86,32.44,0,null,8679.18,"5344",false,0],["3c6758","DLH1333 ","Germany",1581014469,1581014469,8.545,47.7009,11582.4,false,212.56,30.23,0,null,11681.46,"1030",false,0],["3c5442","DLH02J ","Germany",1581014469,1581014469,6.6594,46.3485,10363.2,false,226.41,39.01,0,null,10492.74,"1000",false,0],["3c658e","DLH15U ","Germany",1581014468,1581014469,9.0273,46.5254,10355.58,false,229.56,358.2,0,null,10347.96,"1000",false,0],["4a8159","SCW3P ","Sweden",1581014469,1581014469,6.9469,46.9315,8557.26,false,221.02,229.15,-10.08,null,8557.26,"0763",false,0],["440344","LDM74J ","Austria",1581014469,1581014469,10.1866,46.0682,5631.18,false,197.18,242.83,-14.96,null,5814.06,"4131",false,0]

The current props.conf used for these logs (the REST mechanism is used for data integration) is:

[geomonitor]
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n,])\["
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
disabled = false
pulldown_type = true

Thanks in advance
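A quick sanity check on the break point: the records are separated by ],[ so a LINE_BREAKER along the lines of (\],\[) — a suggestion, not the poster's config — should split them, with Splunk discarding the captured delimiter text. The same pattern checked with Python's re on a trimmed-down sample:

```python
import re

# Hypothetical, trimmed-down sample: three aircraft-state records in one array.
raw = '[["4b1803","SWR55X ","Switzerland"],["3cf0a4","IFA509 ","Germany"],["3c6758","DLH1333 ","Germany"]]'

# LINE_BREAKER-style pattern: break between records at "],[".
# In Splunk the capture group's text is consumed, giving one event per record.
events = re.split(r'\],\[', raw)
print(len(events))  # -> 3
```

Whether the leading `{"time":...` wrapper should also be stripped (e.g. via SEDCMD or indexed-extraction settings) is a separate question.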
Hi, I am trying to plot a graph of response time over a period of time. I am using the timewrap command to plot it for yesterday, the day before yesterday, and last week. The problem is I only want it for a certain period of the day, for example between 12:00 PM and 10:00 PM (peak hours). I am snapping the time in the search itself like this: earliest=-7d@d+3h latest=@d, but it is not working. Please see the graph: on the x-axis it still plots from 12:00 AM, but what I want is from 12:00 PM. Any help is appreciated.
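An aside on the snap-to arithmetic: in SPL, @d snaps back to midnight, so -7d@d+3h starts at 3:00 AM; a 12:00 PM start would presumably need @d+12h. A minimal Python sketch of that snap-then-offset arithmetic (the helper name is made up):

```python
from datetime import datetime, time, timedelta

# Hypothetical helper mirroring SPL snap-to syntax: "@d+Nh" means
# "snap back to midnight, then add N hours".
def snap_day_plus_hours(dt: datetime, hours: int) -> datetime:
    midnight = datetime.combine(dt.date(), time.min)
    return midnight + timedelta(hours=hours)

dt = datetime(2020, 2, 10, 17, 45)
print(snap_day_plus_hours(dt, 3))   # -> 2020-02-10 03:00:00 (what +3h gives)
print(snap_day_plus_hours(dt, 12))  # -> 2020-02-10 12:00:00 (the intended noon start)
```

Restricting each wrapped day to the 12 PM–10 PM window would still need a per-day filter (e.g. on date_hour), since earliest/latest only bound the overall range.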
Please provide an example for Arc Globe Visualisation
Since Valentine's Day is near, Splunk can search for everything. And it might find love, I thought. How?
While creating a Splunk it is showing "Please enter a valid URL beginning with https://" even though my URL format starts with https://.
| makeresults
| eval time=-62167252739
| eval _time=time
| eval time_text=strftime(_time,"%c %::z")

-62167252739 is my negative epoch time limit; I don't know why. -62167219200 is "0000/01/01 00:00:00 +0000". The difference is 9*60*60 + 18*60 + 59. My result is:

_time: 0000/01/01 00:00:59
time: -62167252739
time_text: Sat Jan 1 00:00:00 0000 +09:18:59

My TZ is JST (+09:00). Is this problem JST-only? Hopefully it will be fixed. My Splunk is v8.0.1 on macOS 10.14.6. In Terminal, `date -r -62167252739` gives 0000/01/01 00:00:00 LMT. Given that result, this may not be a Splunk problem.
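Some context on the odd offset: +09:18:59 is Tokyo's historical local mean time (LMT) in the tz database, used for instants before JST was adopted, and -62167252739 differs from -62167219200 by exactly that offset. A sketch with Python's zoneinfo (assumes system tzdata is available; 1887 is simply a pre-JST date that datetime can represent):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Before JST was adopted, the tz database gives Asia/Tokyo its local mean
# time (LMT) offset of +09:18:59 rather than the modern +09:00.
tz = ZoneInfo("Asia/Tokyo")
dt = datetime(1887, 6, 1, tzinfo=tz)
print(dt.utcoffset())  # -> 9:18:59
```

This matches the `date -r` output showing LMT, so the strftime result reflects tz-database history rather than a Splunk arithmetic bug.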
We are using Splunk Insights for Infrastructure, and it's using 10 GB or more a day and consuming a ton of space. I read some threads about collectd, but I am not sure how we are collecting the data. I expect we are using the default method to collect and store the data. Any insights are appreciated.
Given the program below, which accepts arguments from the command line via the string[] args parameter, how do I set up a business transaction that splits based on the first value, args[0]? I have it set to use parameter index 0 and a getter-chain value of .[0] right now, which doesn't seem to work. For the getter chain I have also tried string/0 and int/0, and those do not seem to work either. The Using Getter Chains documentation is not very clear about this and does not provide a clear example of accessing a value in a parameter that is an array.

using System;
using System.Collections.Generic;
using System.Linq;

public class Program
{
    public static void Main(string[] args)
    {
        switch (args[0])
        {
            case "A": DoA(); break;
            case "B": DoB(); break;
        }
    }

    private static void DoA() {}
    private static void DoB() {}
}
Hello, I would like to set up an ongoing alert to be triggered any time an index ingests 20 GB of logs. This is to prevent a license violation caused by developers turning on debug mode and leaving it on, resulting in a lot of unnecessary logs after the issue is resolved. Thank you!
I am receiving the above error when trying to deploy updated and new apps from the Cluster Master to the Indexer Cluster. The apps do exist in the /web/splunk/etc/managed-apps directory structure on the cluster master, but I still get this error. I have reviewed several Answers and none of them have worked for us. Recently we worked on indexes.conf to standardize some features, and I'm wondering if that may have caused it. All we did was remove repFactor=auto from each index (there are about 35) and put it in the [default] stanza, and also remove maxTotalDataSizeMB from each index configuration and put it in the default stanza. Now when we push a bundle, we get "Failed to install app" with "the application does not exist". Would really appreciate any assistance you can give me.
I noticed on my Splunk instance that I am getting messages like these:

02-07-2020 15:20:36.038 -0500 INFO Metrics - group=queue, name=typingqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=993, largest_size=993, smallest_size=993
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=2035, largest_size=2035, smallest_size=2035
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=auditqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=809, largest_size=809, smallest_size=809
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=998, largest_size=998, smallest_size=998
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=6144, current_size_kb=6143, current_size=99, largest_size=99, smallest_size=99
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=995, largest_size=995, smallest_size=995

How can I resolve this?
I would like to save the CSV file to an external location. I am using the |outputcsv command, which saves the file to a Linux host, but I need the file to be picked up from there and moved to an external location such as WVDCCRVFASS\ETL\FlatFiles\Splunk. Can you please let me know how this can be done?
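One common approach (a suggestion, not something from the post): let outputcsv write locally, then have a scheduled script copy the file onward. A minimal Python sketch with placeholder paths:

```python
import shutil
from pathlib import Path

def ship_csv(src: Path, dest_dir: Path) -> Path:
    """Copy an exported CSV to a destination directory (e.g. a mounted share)."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    target = dest_dir / src.name
    shutil.copy2(src, target)  # copy2 also preserves timestamps
    return target
```

On Linux, the Windows share would first need to be mounted (for instance via CIFS), or the transfer could be done with a tool such as smbclient or scp instead of a plain copy.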
I have the username field extraction below in props.conf, which extracts the username:

[sourcetype_X]
EXTRACT-XYZ = username="(?<user>[^+\"]*)"

This extracts the field as follows:

x12345@abc-def-ghij-01.com
y67891@klm-def-ghij-01.com
z45787@abc-def-ghij-01.com
ABC-DEF

Now what would the regex stanza be to extract the username from the above as follows?

x12345
y67891
z45787
ABC-DEF
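One regex that produces the desired output is to capture everything up to the first @ (falling back to the whole value when there is no @), e.g. something like EXTRACT-XYZ = username="(?<user>[^@\"]*)" — a suggestion, not verified against the live data. The character class checked in Python:

```python
import re

# Capture up to the first "@" (or closing quote); with no "@", the whole value matches.
pattern = re.compile(r'(?P<user>[^@"]*)')

values = ["x12345@abc-def-ghij-01.com", "y67891@klm-def-ghij-01.com",
          "z45787@abc-def-ghij-01.com", "ABC-DEF"]
users = [pattern.match(v).group("user") for v in values]
print(users)  # -> ['x12345', 'y67891', 'z45787', 'ABC-DEF']
```

Splunk's EXTRACT uses PCRE rather than Python's re, but a negated character class like this behaves the same in both.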
First, let me start by saying I am not a programmer, a Splunk expert, or highly experienced with regex or SED. I say this so you understand that if you offer an answer, please do not leave any steps out, expecting that I know what should fill in the blanks. I get MAC addresses in the format 00:00:00:00:00:00, but the logs I need to search use the format 00-00-00-00-00-00. I'm looking for a way for Search to take the input with colons and convert the colons to dashes before executing the search, so we do not have to change it manually before running our search.
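The conversion itself is a single character substitution; in SPL this would typically be an eval with replace(mac, ":", "-") applied before the comparison (a sketch of the idea, not a complete macro). The same operation shown in Python:

```python
def colons_to_dashes(mac: str) -> str:
    """Normalize a MAC address from colon form to the dash form used in the logs."""
    return mac.replace(":", "-")

print(colons_to_dashes("00:1a:2b:3c:4d:5e"))  # -> 00-1a-2b-3c-4d-5e
```

In a dashboard, the substitution could be wrapped around a token so users can paste either form.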
Hello, I'm trying to learn (if possible) how to remove the space left by "No title" in panels and visualizations. I want to reclaim this real estate, both when editing and when viewing results in the dashboard. Thanks and God bless, Genesius
Hello, I have the query below and want to search the fieldsummary values by a filterstring, returning all values that match the filterstring from the results of this query:

index=test environment=ps sourcetype=asp_test requestbody=* requestbody="request-body"
| fields requestbody
| xmlkv
| fieldsummary maxvals=10