All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, we have remote Linux servers with universal forwarders already installed by root. We are not admins of these servers, and we don't even have a user we can use to connect to them. We would like to be able to access the servers as basic users, for instance with a "splunk" user, manage the forwarder configuration, and use the splunk command line (splunk start/restart/btool, etc.) with this user, without requiring admin rights. How can we achieve that? I guess we first need the "splunk" user to be created on the servers with a password so that we can at least access the machines. Then do we just need to change the boot-start configuration and the owner of the forwarder file system?

/opt/splunk/splunkforwarder/bin/splunk enable boot-start -user splunk
chown -R splunk /opt/splunk/splunkforwarder

After that, should we be able to manipulate the forwarder as we want without admin rights? We wish we had a spare Linux server to test this on, but sadly we don't. Thank you for your help!
Hi everyone, how can we remove the form inputs from the dashboard URL?

?form.messageInQGrater=Enter%20Message%20Queue%20Value&form.filterQuerystatus=Success&form.filterQueryMT=environment!%3D"*prod*"&form.environment1=non_prod&form.filterQuery_BTS=environment%3D"*prod*"&form.time.earliest=-30m%40m&form.time.latest=now&form.searchfield=Transaction%20ID&form.tableSearchField=Application&form.time1.earliest=-1h%40h&form.time1.latest=now&form.filterQuery_app=environment%3D"*prod*"&form.filterQuery_error=environment%3D"*prod*"&earliest=0&latest=&form.businessGroupMT=*&form.busines_groupQ=*&form.region_nameQ=*&form.appNameMT=*&form.businessGroupTS=*&form.interfaceNameMT=*&form.searchstring=Enter%20Transaction%20ID&form.tableSearchString=&form.environment_nameQ=*&form.busines_group_app=*&form.environment_name_app=*&form.busines_group_error=*&form.application_name_error=*&form.appVersion_app=*&form.category_error=SC1&form.queue_name_dd=*&form.criticality_app=*&form.status_app=*

Thanks & regards, Manikanth
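One way to share a clean link is to strip the form.* parameters before distributing the URL. A minimal Python sketch (the dashboard URL below is a made-up example; Splunk itself will re-add form tokens to the address bar as inputs change):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_form_params(url):
    """Drop every query parameter whose name starts with 'form.'."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.startswith("form.")]
    return urlunparse(parts._replace(query=urlencode(kept)))

clean = strip_form_params(
    "https://splunk.example.com/app/search/my_dashboard"
    "?form.environment1=non_prod&form.time.earliest=-30m%40m&earliest=0&latest=")
print(clean)  # https://splunk.example.com/app/search/my_dashboard?earliest=0&latest=
```

Note that this only cleans a URL you copy out; it does not stop the dashboard from writing tokens into the URL while it is open.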
Hi, I'm trying to set up a DNS lookup following the instructions here: https://docs.splunk.com/Documentation/Splunk/8.0.4/Knowledge/Configureexternallookups#External_looku... But there is no external_lookup.py in $SPLUNK_HOME/etc/system/bin/. Is there a chance to get external_lookup.py anywhere else? I'm running Splunk Enterprise 8.0.4 on SLES 12.
Search 1: index=index_123 (msg="*xyz*") | rex field=msg "results\":{\"(?<abc1>.*)\" *" | stats values(_time) as abc1_time, values(abc1) as abc1
Search 2: index=index_123 (msg="*mnop*") | rex field=msg "results\":{\"(?<abc2>.*)\" *" | stats values(_time) as abc2_time, values(abc2) as abc2

Scenario: Search 1 gave these results: 11:30 AM --- 123, 11:40 AM --- 345. Search 2 gave this result: 11:34 AM --- 123. I want to subtract (search 1 - search 2) with a time difference of less than 3 minutes, so that I get 123 (time difference greater than 3 minutes) and 345 (no record in search 2).
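The subtraction rule can be prototyped outside SPL to pin down the logic: keep a row from search 1 unless search 2 has the same value within 3 minutes. A sketch with made-up "HH:MM" timestamps:

```python
from datetime import datetime, timedelta

def subtract_within(search1, search2, tolerance=timedelta(minutes=3)):
    """Return (time, value) rows from search1 that have NO matching value
    in search2 within `tolerance`; matches inside the tolerance are dropped."""
    def parse(t):
        return datetime.strptime(t, "%H:%M")
    kept = []
    for t1, v1 in search1:
        matched = any(v1 == v2 and abs(parse(t1) - parse(t2)) < tolerance
                      for t2, v2 in search2)
        if not matched:
            kept.append((t1, v1))
    return kept

s1 = [("11:30", "123"), ("11:40", "345")]
s2 = [("11:34", "123")]
print(subtract_within(s1, s2))  # both rows kept: 123 is 4 min away, 345 has no match
```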
When searching against a data model, a common search syntax is

| tstats min(_time) as earliest from datamodel=.....

Is the "_time" in the data model the same as the "_time" in the index, or is it the time when the events were populated into the data model?
I have been trying to download Splunk Enterprise as instructed in the online training. I have tried several times; the download completes and puts the files in the Program folder, but when I try to launch Splunk it tells me there is an installation error with no explanation, and then I get a message that the download was interrupted. Does anyone have any ideas? I cannot find a number to reach a support line. Thanks!
Hi - I need a Splunk alert for when a Jenkins master goes offline, e.g. if the host or service quits for any reason. Not the slaves, the master. I'm new to Splunk and in the middle of a proof of concept using a trial Enterprise license, plus the Splunk App for Jenkins, collecting data from nine Jenkins masters. I have some other alerts working, with searches based on those included with the Jenkins app.
I have some SQL audit files (filename.sqlaudit) that I want to import using batch. The configuration is all done and working for test files like a .txt file, but the .sqlaudit file will not import. Running '.\splunk.exe list inputstatus' gives me 'type = unreadable file type'. I have the Splunk Add-on for Microsoft SQL Server installed on the search head, so that should parse the file once it's imported, correct? How do I get the UF to process the file?
I recently installed the Microsoft Azure Stack Add-on for Splunk and get a white screen when I click on the app. Any ideas? I don't know where to start with this one. Thanks, folks!
This question has come up a few times: how does Splunk handle data integrity in a large ES implementation? The Splunk docs state: 'Data integrity control feature. SmartStore-enabled indexes are not compatible with the data integrity control feature, described in Manage data integrity in the Securing Splunk Enterprise manual.' As covered in https://docs.splunk.com/Documentation/Splunk/8.0.4/Indexer/AboutSmartStore
I have the following logs:

2020-06-30 17:58:28,307; thread=[workflow-503]; LogLevel=INFO; class=a.b.c.getData; milestone_end; id=asddd_ddd_ddd0, key1=193514641285449, tid=60d06-26c3-4281-8600-79338c6, flow=MessageMatching, RecordsCount=0, appId=appid1; msg=PerfStat-Repository: solrQuery={"q":"a_id:193514641285449 AND (status:TO_BE_REVIEWED AND transaction_source:(SOURCE_A) AND amount:\\-80.00) AND -(deleted:true) AND -(is_primary:false)","start":"0","route.partition":["193514641285449"],"timeAllowed":10000}; recordCount=0; elapsedTime=3;;

Here we have a solrQuery of the form {"q":"key1:value1 AND key2:value2 AND (key3:value3)"}. I want to extract only the keys, in the format key1,key2,key3.
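In SPL this would typically be a rex with max_match=0; the extraction itself can be prototyped in Python first. A sketch that pulls the q clause out of the log line and lists the field name before each colon (colons inside values would also match, so treat it as a first cut):

```python
import re

def extract_solr_keys(msg):
    """Pull the q="..." clause out of a solrQuery log line and return the
    comma-separated field names (the word before each colon)."""
    m = re.search(r'"q":"(.*?)"[,}]', msg)
    if not m:
        return ""
    keys = re.findall(r'(\w+):', m.group(1))
    return ",".join(dict.fromkeys(keys))  # dedupe while keeping order

print(extract_solr_keys('{"q":"key1:value1 AND key2:value2 AND (key3:value3)"}'))
# key1,key2,key3
```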
I'm experiencing an issue running a Python script as a trigger action. I get this error from the urllib.request module:

x = urllib.request.urlopen('https://www.google.com/')
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 548, in _open
    'unknown_open', req)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 1387, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: https>

If I run the script with the OS Python it works. Any ideas? Thanks
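"unknown url type: https" from urllib usually means the interpreter has no HTTPS handler registered, which happens when that Python build cannot load the ssl module. A quick diagnostic you can run with Splunk's bundled interpreter (e.g. $SPLUNK_HOME/bin/splunk cmd python3) to compare it against the OS Python:

```python
import urllib.request

def https_supported():
    # urllib.request only defines HTTPSHandler when the interpreter was
    # built with working SSL support; without it, opening an https URL
    # raises URLError("unknown url type: https").
    return hasattr(urllib.request, "HTTPSHandler")

print("HTTPS supported:", https_supported())
```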
I have configured inputs.conf to monitor these counters:

###### Splunk 5.0+ Performance Counters ######

## CPU
[perfmon://CPU]
counters = % Processor Time; % User Time; % Privileged Time
disabled = 0
instances = *
interval = 10
mode = single
object = Processor
useEnglishOnly=true
index=perfmon

## Logical Disk
[perfmon://LogicalDisk]
counters = % Free Space; Free Megabytes; Current Disk Queue Length; % Disk Time; Avg. Disk Queue Length; % Disk Read Time
disabled = 0
instances = *
interval = 10
mode = single
object = LogicalDisk
useEnglishOnly=true
index=perfmon

## Physical Disk
[perfmon://PhysicalDisk]
counters = Current Disk Queue Length; % Disk Time
disabled = 0
instances = *
interval = 10
mode = single
object = PhysicalDisk
useEnglishOnly=true
index=perfmon

## Memory
[perfmon://Memory]
counters = Page Faults/sec; Available Bytes; Committed Bytes
disabled = 0
interval = 10
mode = single
object = Memory
useEnglishOnly=true
index=perfmon

## Process
[perfmon://Process]
counters = % Processor Time; % User Time
disabled = 0
instances = *
interval = 10
mode = single
object = Process
useEnglishOnly=true
index=perfmon

## ProcessorInformation
[perfmon://ProcessorInformation]
counters = % Processor Time; Processor Frequency
disabled = 0
instances = *
interval = 10
mode = single
object = Processor Information
useEnglishOnly=true
index=perfmon

## System
[perfmon://System]
counters = File Read Operations/sec; File Write Operations/sec; File Control Operations/sec
disabled = 0
instances = *
interval = 10
mode = single
object = System
useEnglishOnly=true
index=perfmon

But I am only getting data from "Perfmon:Process". I don't have any error messages, and I don't know why I am not receiving the other data. I am indexing two Windows servers, and the same thing happens on both: I only receive data from "Perfmon:Process".
Hi, I recently installed the Microsoft Azure Stack Add-on for Splunk and when I click on the app I get a white screen. Not sure where to even start with something like this. Thanks in advance. TS
So data in an index is missing from 30th May to 20th June. I was cross-checking it using the epoch times of the missing period, and I found these files in index/db/. What do these files mean?

-rw------- 1 root root     0 Jun 10 00:16 rb_1591740508_1591694190_1481_B0964DEC-509D-40A9-A451-529E6E725A5F.rbsentinel
drwx------ 3 root root  4096 Jun 10 09:11 rb_1591743776_1591657655_2201_7934D4E6-9F8E-4A10-9002-279FAC932938

But the data still isn't showing up in search. Is my troubleshooting approach wrong for missing indexes? Any recommended troubleshooting? Thanks.
Hello team, we have an alert that finds the string 'is now DOWN' and then sends the alert, but we realized that after a few seconds to a few minutes the node comes back up. So we want to get the alert only if the node doesn't come up within 5 minutes. Can anyone help us? We want the alert to follow this flow:

search string: 'is now DOWN'
condition: check for the string '10.83.29.240 is now UP' within the next 5 minutes before sending the alert

Log prints:
INFO [GossipTasks:1] 2020-06-30 01:42:40,115 Gossiper.java:1041 - InetAddress /10.83.29.240 is now DOWN
INFO [SharedPool-Worker-4] 2020-06-30 01:42:51,401 Gossiper.java:1026 - InetAddress /10.83.29.240 is now UP

As you can see, the node came back up about 11 seconds later.

Thanks, Chandra
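The "alert only if no matching UP within 5 minutes" rule can be sketched outside SPL first (in Splunk it would typically become a scheduled search that pairs each DOWN event with a later UP event for the same address). A Python sketch over simplified (timestamp, ip, state) tuples:

```python
from datetime import datetime, timedelta

def unrecovered_nodes(events, window=timedelta(minutes=5)):
    """events: (timestamp_str, ip, state) tuples, state 'DOWN' or 'UP'.
    Return the ips whose DOWN was not followed by an UP within `window`."""
    fmt = "%Y-%m-%d %H:%M:%S"
    downs = [(datetime.strptime(t, fmt), ip) for t, ip, s in events if s == "DOWN"]
    ups = [(datetime.strptime(t, fmt), ip) for t, ip, s in events if s == "UP"]
    alert = []
    for t_down, ip in downs:
        recovered = any(u_ip == ip and t_down <= t_up <= t_down + window
                        for t_up, u_ip in ups)
        if not recovered:
            alert.append(ip)
    return alert
```

With the two log lines from the question, the DOWN at 01:42:40 is matched by the UP at 01:42:51, so no alert fires.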
Hi team, I want to effectively monitor a system with 100+ URIs. So far the approach has been to monitor server errors by tracking 500 (or 5xx) status codes (the HTTP status codes along with the URI are printed nicely in the Splunk logs!). Recently I've seen two problems:

1. Issues came up with 401 or 403 status codes. It may seem easy to just add 4xx status codes to the monitoring, but with so many URIs it's tedious.
2. For some URIs no traffic was generated at all, so they could never show up in the 5xx monitoring since the traffic itself was 0.

I know a possible solution is to use a lookup file and fillnull the URIs, but my 5xx monitoring doesn't use a lookup file: I do a blanket search across all logs, then stats by URI, and throw an alert if the 5xx percentage is more than 20%. The reason for not using a URI lookup file is that these URIs change every week, and I wanted a robust solution that works without manual updates. Please suggest a way to effectively monitor this situation. I'd also like to know if there is a specific command (like anomaly or something similar) I could look into that might help.
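The intended logic, flag any URI whose 4xx/5xx rate passes a threshold, and also flag URIs that were seen in an earlier window but got no traffic now, can be prototyped like this (a sketch; the earlier window stands in for the lookup file you want to avoid maintaining):

```python
from collections import Counter

def flag_uris(requests, baseline_uris, threshold=20.0):
    """requests: (uri, status) pairs for the current window.
    baseline_uris: URIs seen in an earlier window (replaces a manual lookup).
    Returns ({uri: error_pct} above threshold, [URIs with no traffic now])."""
    total, errors = Counter(), Counter()
    for uri, status in requests:
        total[uri] += 1
        if status >= 400:  # count both client (4xx) and server (5xx) errors
            errors[uri] += 1
    flagged = {uri: round(100.0 * errors[uri] / total[uri], 1)
               for uri in total
               if 100.0 * errors[uri] / total[uri] > threshold}
    silent = sorted(set(baseline_uris) - set(total))
    return flagged, silent
```

In SPL the baseline could come from the same search over a previous time range, so the URI list keeps itself current without manual edits.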
I'm trying to fetch Windows perfmon data using the input stanza below, but I'm not receiving any data and I can see errors in the Splunk messages.

inputs.conf:

[perfmon://LogicalDisk]
counters = Free Megabytes;% Free Space
instances = *
interval = 30
mode = single
object = LogicalDisk
index = em_metrics
_meta = os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:LogicalDisk
disabled = 0

Error messages:

The metric value=<unset> is not valid for source=LogicalDisk, sourcetype=LogicalDisk, host=******, index=em_metrics. Metric event data with an invalid metric value cannot be indexed. Ensure the input metric data is not malformed.

The metric name is missing for source=LogicalDisk, sourcetype=LogicalDisk, host=********, index=em_metrics. Metric event data without a metric name is invalid and cannot be indexed. Ensure the input metric data is not malformed.

Splunk Enterprise is 8.0.3; the forwarders I have tried are 7.3.6, 8.0.3, and 8.0.4. Can anyone help here, please?
Trying to make this search more efficient. Any tips? Would multisearch work more efficiently?

index=<myindex> sourcetype="cisco_cdr" duration > 0 earliest=-1d@d latest=@d
| rename callingPartyUnicodeLoginUserID AS SID
| where match(SID, "[A-Za-z]\d{6}")
| eval Calls_Made_cob_date = strftime(_time, "%Y-%m-%d"), Day=strftime(_time,"%a"), Date=strftime(_time, "%Y-%m-%d")
| stats count as Calls_Made sum(duration) as Calls_Made_Duration by Date Day SID
| append
    [| search index=<myindex> sourcetype="cisco_cdr" duration > 0 earliest=-1d@d latest=@d
    | rename finalCalledPartyUnicodeLoginUserID AS SID
    | where match(SID, "[A-Za-z]\d{6}")
    | eval Calls_Received_cob_date = strftime(_time, "%Y-%m-%d"), Day=strftime(_time,"%a"), Date=strftime(_time, "%Y-%m-%d")
    | stats count as Calls_Received sum(duration) as Calls_Received_Duration by Date Day SID]
| stats values(*) as * by Date Day SID
| fillnull value="0"
| eval Total_Duration=(Calls_Made_Duration+Calls_Received_Duration)
| where Total_Duration > 0
| table SID Date Day Calls_Made Calls_Made_Duration Calls_Received Calls_Received_Duration Total_Duration
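One common alternative to append here is to credit each event to both roles in a single pass, which avoids scanning the same index twice. A sketch of that one-pass idea in Python, using the field names from the search above (note the match() in the original search is unanchored; this sketch anchors the SID pattern):

```python
import re
from collections import defaultdict

SID_RE = re.compile(r"^[A-Za-z]\d{6}$")

def summarize(events):
    """One pass over CDR events instead of two appended searches.
    Each event may carry a caller SID, a called SID, or both; the same
    event is credited to every matching role in one pass."""
    stats = defaultdict(lambda: {"made": 0, "made_dur": 0,
                                 "received": 0, "received_dur": 0})
    for ev in events:
        caller = ev.get("callingPartyUnicodeLoginUserID", "")
        called = ev.get("finalCalledPartyUnicodeLoginUserID", "")
        dur = ev.get("duration", 0)
        if SID_RE.match(caller):
            stats[caller]["made"] += 1
            stats[caller]["made_dur"] += dur
        if SID_RE.match(called):
            stats[called]["received"] += 1
            stats[called]["received_dur"] += dur
    return dict(stats)
```

The SPL equivalent would be one base search that evaluates both SID fields and aggregates each role with stats, rather than an append of a second full scan.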
Hi all, we just upgraded to ITSI 4.4.4 and noticed a banner about beta glass tables. Please see the example below. I was wondering if there is a way to permanently disable this notification. I tried clicking the "x", but the notice comes back after a refresh of the page. Thanks in advance.