All Topics
Hello, I have data that comes in as .txt format. It's dropped into a folder that's monitored by Splunk. There is a current extraction we use to pull the headers out of the data, but a new field has been added to the .txt and I need to create a new extraction for the headers. The data looks like this:

Type AppliesTo Path Snap Hard Soft Adv Used Efficiency
---------------------------------------------------------------------------------------------------
directory DEFAULT /ifs/common-place No 200.00G - 190.00G 0.00 0.00 : 1
directory DEFAULT /ifs/data/capacity/T1000-CPReports No 100.00M - 99.00M 53.00 0.00 : 1
directory DEFAULT /ifs/work/departments/T1000/Cognitus No 10.00G - 9.50G 348.27M 0.42 : 1
directory DEFAULT /ifs/work/Projects/ref/staging No 200.00G - 195.00G 3.72G 0.74 : 1
directory DEFAULT /ifs/work/Projects/S4/ref/sapmnt No 200.00G - 195.00G 1.69G 0.54 : 1
directory DEFAULT /ifs/data/capacity/T1000-CPReports No 100.00M - 99.00M 16.22k 0.13 : 1
---------------------------------------------------------------------------------------------------
Total: 6

Each field is either populated with a value or, when there is no value, contains a literal dash (-). The "Path" field may contain a space. The last column/field is called "Efficiency". For example, in this record the Efficiency is "0.13 : 1":

directory DEFAULT /ifs/data/capacity/T1000-CPReports No 100.00M - 99.00M 16.22k 0.13 : 1

and in this record it is "0.00 : 1":

directory DEFAULT /ifs/data/capacity/T1000-CPReports No 100.00M - 99.00M 53.00 0.00 : 1

Can someone help me fix the extraction, or suggest a better one? Here is what I have:

REGEX = ^ *(?<Type>directory|file) +(?<AppliesTo>[^ ]+) +(?<Path>.+) +(?<Snap>[^ ]+) +(?<Hard>[^ ]+) +(?<Soft>[^ ]+) +(?<Adv>[^ ]+) +(?<Used>[^ ]+) *$

Thank you for the help.
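A sketch of one way the extraction could be revised, tested here in Python rather than as a props.conf REGEX (Python spells named groups (?P<name>...) where Splunk uses (?<name>...)). It assumes every trailing column is a single space-free token except Efficiency, which has the form "<value> : <value>"; Path is matched lazily so the remaining columns bind from the right even when the path contains a space:

```python
import re

# Assumed column layout from the sample data; Path is the only field
# matched lazily (.+?), so Snap..Used and Efficiency anchor from the end.
PATTERN = re.compile(
    r"^ *(?P<Type>directory|file)"
    r" +(?P<AppliesTo>\S+)"
    r" +(?P<Path>.+?)"
    r" +(?P<Snap>\S+)"
    r" +(?P<Hard>\S+)"
    r" +(?P<Soft>\S+)"
    r" +(?P<Adv>\S+)"
    r" +(?P<Used>\S+)"
    r" +(?P<Efficiency>\S+ : \S+) *$"
)

line = ("directory DEFAULT /ifs/data/capacity/T1000-CPReports "
        "No 100.00M - 99.00M 16.22k 0.13 : 1")
m = PATTERN.match(line)
print(m.group("Efficiency"))  # 0.13 : 1
print(m.group("Path"))        # /ifs/data/capacity/T1000-CPReports
```

If the Efficiency column's spacing around the colon can vary, the last group would need loosening (e.g. `\S+ *: *\S+`); that is an assumption to verify against the real feed.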
How can I hide the Splunk navigation bar for an entire app? I know about hideSplunkBar="true", but that is at the dashboard level; I want to hide the Splunk bar for the entire app I am creating. I have also tried adding a CSS file under '$SPLUNK_HOME\etc\apps\<appname>\appserver\static\' with the following:

div[data-view="views/shared/splunkbar/Master"] {
    display: none !important;
}

.view---pages-enterprise---8-0-4---2Izs_ {
    display: none !important;
}

This did not hide the Splunk bar. Please provide guidance if anybody has achieved this.
Hi all, because we have Splunk running in multiple security environments, we have two separate indexer clusters. For some data we need to send to both indexer clusters, and for some to only one of them. We do this on the HF by setting the _TCP_ROUTING key with props.conf & transforms.conf as described in https://docs.splunk.com/Documentation/Splunk/8.0.6/Forwarding/Routeandfilterdatad, or by setting _TCP_ROUTING directly in inputs.conf on the UF. In outputs.conf we configure the two different destinations as in the example below. We see that all Splunk doc examples use different ports for different destinations. Is this required for Splunk to function as intended, or is it only best practice? When do you need to use a port other than 9997 in outputs.conf, and when not? This is not clear from the documentation. Please advise, thanks!

[tcpout]
defaultGroup = everythingElseGroup

[tcpout:syslogGroup]
server = 10.1.1.197:9996, 10.1.1.198:9997

[tcpout:errorGroup]
server = 10.1.1.200:9999

[tcpout:everythingElseGroup]
server = 10.1.1.250:6666
We are working to integrate Splunk with IDAM for SSO. We have three Splunk search head clusters for three sets of user groups. On checking the Splunk metadata file, the IDAM engineer tells us that all three environments have the same entity ID, and that we need to change it to differentiate between the clusters. Please let me know how to do this.
Usually I find an individual alert, i.e. a saved search, among a large number of alerts by searching for it by name. How can I find the individual alert that generates a known, specific alarm ID, e.g. "file error 12345"? More generally, how does one find an alert, among a large number of alerts, based on the contents of the events it generates? Is there a way to find all alerts that generate alarm IDs containing a given text, i.e. where the text is a substring of the complete alarm ID? For example, all alerts that generate alarm IDs containing "file error"?
Hi, we have been using the .js and .css files below to create a kind of feedback form in a Splunk dashboard. Once feedback is submitted by a user it gets stored in feedback.csv. I need to add two functionalities to the feedback form:
1. Add stars that the user fills in to record their satisfaction level.
2. When the Send button is clicked after filling in feedback, the username should also get stored in feedback.csv.

Below are the .js and .css files.

---------- JavaScript file ----------

require([
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/simplexml/ready!'
], function (mvc, SearchManager) {
    var updateCSV = new SearchManager({
        id: "updateCSV",
        autostart: false,
        cache: false,
        search: "| makeresults | eval feedback=\"$setMsg$\" | inputlookup append=true feedback.csv | outputlookup feedback.csv"
    }, { tokens: true });

    $(document).find('.dashboard-body').append('<button id="feedback" class="btn btn-primary">Feedback</button>');
    $(document).find('.dashboard-body').append('<div class="chat-popup" id="myForm"><form class="form-container"><h1>Feedback</h1><label for="msg"><b>Message</b></label><textarea placeholder="Type Feedback.." name="msg" id="msgFeedback" required></textarea><span id="validationFeebback"></span><button id="sbmtFeedback" type="button" class="btn">Send</button><button id="cnclFeebackPopUP" type="button" class="btn cancel">Close</button></form></div>');

    $("#feedback").on("click", function () {
        $('#msgFeedback').val("");
        $(document).find("#validationFeebback").text("");
        $(document).find('.chat-popup').show();
    });

    $("#cnclFeebackPopUP").on("click", function () {
        $(document).find('.chat-popup').hide();
    });

    $("#sbmtFeedback").on("click", function () {
        var msg = $('#msgFeedback').val();
        if (msg.length <= 10 || (msg.length == 1 && msg == " ")) {
            $(document).find("#validationFeebback").text("Invalid Feedback").css({ 'color': 'red' });
        } else {
            var tokens = splunkjs.mvc.Components.get("default");
            tokens.set("setMsg", msg);
            updateCSV.startSearch();
            $(document).find("#validationFeebback").text("Your feedback has been submitted..!").css({ 'color': 'green' });
        }
    });
});

---------- CSS file ----------

.chat-popup {
    display: none;
    position: fixed;
    bottom: 120px;
    border: 3px solid #f1f1f1;
    z-index: 9;
    margin-left: 30%;
}

/* Add styles to the form container */
.form-container {
    max-width: 300px;
    padding: 10px;
    background-color: white;
    min-width: 500px;
}

/* Full-width textarea */
.form-container textarea {
    width: 100%;
    padding: 15px;
    margin: 5px 0 22px 0;
    border: none;
    background: #f1f1f1;
    resize: none;
    min-height: 200px;
}

/* When the textarea gets focus */
.form-container textarea:focus {
    background-color: #ddd;
    outline: none;
}

/* Style for the submit/send button */
.form-container .btn {
    background-color: #4CAF50;
    color: white;
    padding: 16px 20px;
    border: none;
    cursor: pointer;
    width: 100%;
    margin-bottom: 10px;
    opacity: 0.8;
    line-height: 5px;
}

/* Red background for the cancel button */
.form-container .cancel {
    background-color: red;
}

/* Hover effects for buttons */
.form-container .btn:hover,
.open-button:hover {
    opacity: 1;
}

.btn-primary {
    float: right;
}

.btn {
    margin-right: 10px;
    margin-top: 10px;
    line-height: 25px;
}
I am preparing a volume report for my project. My requirement is to capture the peak hour (the hour with the highest number of calls), with its date and time, and pass that date and time into a subsearch to get statistical data. My search should look like this: (query to get the peak hour) | (subsearch with a stats command over the duration of the peak hour). I want to print the peak hour together with the statistical output in a single query. Any suggestions on how to achieve this?
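Outside SPL, the two-step logic being asked for (bucket events by hour, find the busiest hour, then restrict to just that hour for further statistics) can be sketched in Python; the timestamps here are made up for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical call timestamps (in Splunk these would be _time values).
calls = [
    "2020-10-05 02:10:00", "2020-10-05 02:45:00", "2020-10-05 02:59:00",
    "2020-10-05 03:05:00", "2020-10-05 04:20:00",
]
times = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t in calls]

# Step 1: bucket events by hour and find the peak hour.
by_hour = Counter(t.replace(minute=0, second=0) for t in times)
peak_hour, peak_count = by_hour.most_common(1)[0]

# Step 2: the "subsearch" part: keep only events inside the peak hour,
# ready for whatever statistics are needed over that window.
in_peak = [t for t in times if t.replace(minute=0, second=0) == peak_hour]

print(peak_hour, peak_count)  # 2020-10-05 02:00:00 3
```

In SPL the same shape is typically a `bin _time span=1h | stats count by _time | sort - count | head 1` style query feeding a subsearch, but the exact query depends on the data.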
Hi all, I have recently deployed the Splunk TA Stream on a universal forwarder to collect DNS data. The Stream app is configured on a heavy forwarder. The universal forwarder is forwarding the data to an indexer cluster. The streamfwd.exe service on the DNS server is consuming 1 GB of memory. Is it normal behavior for the streamfwd.exe service to use memory in the GB range?

UF host details: Windows 2012 R2, Memory: 32 GB, 64-bit.

Configuration on the universal forwarder:

limits.conf

maxKbps = 4096

inputs.conf

[streamfwd://streamfwd]
splunk_stream_app_location = https://<HF_IP>:8000/en-us/custom/splunk_app_stream/
disabled = 0
stream_forwarder_id =
sslVerifyServerCert = false
When I tried to add an extra path in the Splunk deployment client (Wildfly logs new):

# Wildfly logs
[monitor:///opt/applications/wildfly/standalone/log/server.log]
sourcetype = jboss_log
disabled = false
followTail = 0
index = newvt_prod
blacklist = .*\.(old|temp|gz|bz2|zip)$

# Wildfly logs new
[monitor:///opt/applications/wildfly/standalone-ext/log/server.log]
sourcetype = jboss_log
disabled = false
followTail = 0
index = newvt_prod
blacklist = .*\.(old|temp|gz|bz2|zip)$

and pushed it to the forwarders, the index could not retrieve any logs from the new path:

/opt/applications/wildfly/standalone-ext/log/server.log

It only retrieves from the old path:

/opt/applications/wildfly/standalone/log/server.log

1. Checked permissions; they are OK (r-x for the splunk user).
2. Checked the path; it is OK, no misspelling.
3. Checked the index; it is growing in size and is not disabled.

I cannot find any other issue. Can someone help? BR/ CAngel
A simple search (index="xx" source="/aa/bb/cc.log") made on my search head takes 4 minutes to display 7.5 million events for the past 4 hours. This seems to be very slow performance. My architecture contains 2 peer nodes plus a master and a search head, all dedicated machines. More complex searches with regex take an enormous amount of time. Where do I start troubleshooting this slowness? Will increasing IOPS for the hot db (/var/opt/splunk/db) on my peer nodes have a positive effect on performance, and are there any other things to check?
Hello community, I am currently looking into the DLTK app provided by Splunk. I could install Docker on my Windows Server as required, but after configuring the DLTK app (adding the Docker host) I get the following error:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\urllib3\connection.py", line 160, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\urllib3\util\connection.py", line 84, in create_connection
    raise err
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\urllib3\util\connection.py", line 74, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

Thanks
Dear all, I upgraded the universal forwarder from 7.2.0 to 8.0.2 on AIX. When I start it, I encounter the problem below. I have tried running slibclean and re-running the upgrade, but it still fails. I have also tried running 'splunk validate files'.
Hi Splunkers, after several days of being blocked on an issue with a lookup, I am looking for a little help here. Here is my problem: I have an asset which sends me alerts, sometimes the same alert, so I want to exclude the duplicate ones. For this, I created a lookup that saves the results of my alert's query. This part works great. Next, I want to exclude any event that matches ALL the fields of the lookup.

My lookup, duplicate.csv, looks like this:

subject,source,dest,malware
null,null,null,null

The fields of my search events have the SAME names as my lookup fields. Here is my query at the moment:

index=xxxxxx | search NOT [| lookup duplicate.csv subject AS source,dest AS dest,malware AS malware | outputlookup append=true duplicate.csv

I don't know how to create the link between the search fields and the lookup fields, because they share the same names. And I don't know how to display an event ONLY if it matches all four fields in my lookup. Thanks for your help. This is a lovely community.
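The matching semantics being asked for (drop an event only when every one of the four fields matches a lookup row) can be sketched outside SPL; the rows and field values here are invented for illustration:

```python
# Sketch of "exclude only on a full four-field match".
# An event survives unless (subject, source, dest, malware) all
# equal the corresponding values of some lookup row.
FIELDS = ("subject", "source", "dest", "malware")

lookup_rows = [
    {"subject": "s1", "source": "a", "dest": "b", "malware": "m1"},
]
events = [
    {"subject": "s1", "source": "a", "dest": "b", "malware": "m1"},  # duplicate
    {"subject": "s1", "source": "a", "dest": "b", "malware": "m2"},  # new alert
]

seen = {tuple(r[f] for f in FIELDS) for r in lookup_rows}
new_events = [e for e in events if tuple(e[f] for f in FIELDS) not in seen]

print(len(new_events))  # 1
```

In SPL this full-row match is the behavior one usually builds with a subsearch over the lookup returning all four fields, so partial matches do not get excluded.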
I have a dashboard created in a separate app. It is structured with multiple tabs and around 100 panels. Whenever we open that app, the first dashboard takes too much time to load; after loading, the panel queries run at normal speed. Any suggestions to make it load faster when a user opens it for the first time?
I have a table like the one below, which lists different services under one column: Service A (subservices A1 to A5) / Service B (subservices B1 to B5). I need to add a new column that denotes one final status, like this: if any one Status is RED, then the final status is RED; if there is no RED but one YELLOW and many GREEN, then the final status is YELLOW. What is the best condition I can use to achieve this final result?

Service  Status
A1       GREEN
A2       RED
A3       YELLOW
A4       GREEN
A5       GREEN
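The rule being described is a worst-status-wins precedence (RED beats YELLOW beats GREEN), which reduces to taking the maximum severity present. A sketch of that logic, using the sample table's values:

```python
# Worst-status wins: RED > YELLOW > GREEN.
SEVERITY = {"GREEN": 0, "YELLOW": 1, "RED": 2}

def final_status(statuses):
    # The final status is the highest-severity status present.
    return max(statuses, key=lambda s: SEVERITY[s])

statuses = ["GREEN", "RED", "YELLOW", "GREEN", "GREEN"]  # A1..A5
print(final_status(statuses))                    # RED
print(final_status(["GREEN", "YELLOW", "GREEN"]))  # YELLOW
```

In SPL the same idea is typically an eval mapping each status to a number, a max aggregation, and a mapping back, but the exact query depends on how the table is produced.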
Hello, the JAVA_HOME path does not exist in Splunk DB2 Connect. In the Splunk dbx_settings I have done:

[java]
javaHome = C:\Program Files\Java\jre1.8.0_26
JAVA_HOME = C:\Program Files\Java\jre1.8.0_26

In the environment variables:

JAVA_HOME = C:\Program Files\Java\jre1.8.0_26
javaHome = C:\Program Files\Java\jre1.8.0_26

In the Splunk DB2 Connect "JRE Installation Path (JAVA_HOME)" field I am putting:

C:\Program Files\Java\jre1.8.0_26

I have also tried $JAVA_HOME, $JAVA_HOME$, and %JAVA_HOME%, but no luck.
Hi, this is my first question in the Splunk community. Could anyone please guide me through the proper steps to remove indexes from a Splunk cluster environment? I also have to remove all dashboards, reports, sourcetype renamings, all storage of the indexes, etc.

Thanks,
Sam
Hi everyone, below are my logs:

2020-10-05T09:12:25.457507609Z app_name=abc environment=e2 ns=c2_container=arc-api pod_name=deployment-51-csl4p message=2020-10-05 02:12:25.456 INFO [arc-service,3b5bbd7422319fde,3b5bbd7422319fde,true] 1 --- [or-http-epoll-4] c.a.b.arc.controller.ARCFileController : Invoked:PULL_GRS_FILE_UPLOAD
2020-10-05T09:12:25.457507609Z app_name=abc environment=e2 ns=c2_container=arc-api pod_name=deployment-51-csl4p message=2020-10-05 02:12:25.456 INFO [arc-service,3b5bbd7422319fde,3b5bbd7422319fde,true] 1 --- [or-http-epoll-4] c.a.b.arc.controller.ARCFileController : Invoked:DOWNLOAD_S3
2020-10-05T09:12:25.457507609Z app_name=abc environment=e2 ns=c2_container=arc-api pod_name=deployment-51-csl4p message=2020-10-05 02:12:25.456 INFO [arc-service,3b5bbd7422319fde,3b5bbd7422319fde,true] 1 --- [or-http-epoll-4] c.a.b.arc.controller.ARCFileController : Invoked:UPLOAD_S3

At the end of each log line there is a pattern: PULL_GRS_FILE_UPLOAD, DOWNLOAD_S3, or UPLOAD_S3. I want to display these patterns with their counts. Can someone guide me with the search query for this? As of now I am viewing the events using the search below:

index=abc ns=xyz app_name=ok "Invoked:DOWNLOAD_S3"
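As a sketch of the extraction being asked about, the trailing token after "Invoked:" can be captured with a regex and tallied. In SPL this is typically something like a rex extraction followed by `stats count by` the extracted field; here the same regex is exercised in Python against abbreviated versions of the sample lines:

```python
import re
from collections import Counter

# Abbreviated sample log lines from the post.
logs = [
    "... ARCFileController : Invoked:PULL_GRS_FILE_UPLOAD",
    "... ARCFileController : Invoked:DOWNLOAD_S3",
    "... ARCFileController : Invoked:UPLOAD_S3",
    "... ARCFileController : Invoked:DOWNLOAD_S3",
]

# Capture the token after "Invoked:" at the end of each line.
pattern = re.compile(r"Invoked:(?P<invoked>\S+)\s*$")

counts = Counter()
for line in logs:
    m = pattern.search(line)
    if m:
        counts[m.group("invoked")] += 1

print(counts["DOWNLOAD_S3"])  # 2
```

The same capture group, written in Splunk's (?<invoked>\S+) syntax, should work inside a rex command, though the exact surrounding query depends on the index and sourcetype.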
How do I send syslog events from Prisma Cloud to Splunk Enterprise?
Hi, we have a Splunk multisite cluster environment (site1_Master, site2). Due to frequent datacenter failure issues, we are planning to migrate the servers of our master site to another datacenter. I would like to know the best way to migrate with little or no downtime, and also the order in which to move the components to the new datacenter. We have a license master, cluster master, deployment server, deployer, clustered search heads, and clustered indexers in this master-site datacenter. We also have the Splunk ITSI DA app running.

Thanks in advance.