All Topics



How to improve Splunk Deployment server scalability?
Hi Splunk experts, I have the use case below and am using the following query:

index=Index1 app_name IN ("customer","contact")
| rex field=msg.message.details "\"accountUuid\":\"(?<SFaccountUUID>[^\n\r\"]+)"
| rex field=msg.message.details "\"contactId\":\"(?<SFcontactUUID>[^\n\r\"]+)"
| rex field=msg.details "\'customerCode\'\=\'(?<cac>[^\n\r\']{10})\'"
| rename msg.correlationId AS correlationId
| stats latest(SFcontactUUID) as contactUUID, latest(SFaccountUUID) as accountUUID, values(msg.tag.Status) as QStatus, values(msg.tag.errorMessage) as Q_errorMessage, values(msg.tag.errorCode) as Q_errorCode by correlationId
| join type=left correlationId
    [search index=index2 app_name="contact1"
    | rename msg.message.header.correlationId AS correlationId
    | stats values(msg.message.header.Status) AS DStatus, values(msg.message.header.eventName) AS eventName, values(msg.message.header.errorMessage) as D_errorMessage, values(msg.message.header.errorCode) as D_errorCode by correlationId]

The common identifier between the two searches is correlationId. Below is a sample result:

correlationId contactUUID accountUUID Q_errorMessage Q_errorCode D_errorCode D_errorMessage
ab861125-6cd7-493b-999f-ef9b2edd8315023758601   C0DABCC1-EFC8-11eb-A67A-005056B89B42 null null 201 null null { "ContactUUID": "b020c98a-43f5-d6b3-e983-45ffddf52a73"}

Is it possible to coalesce the highlighted value (the ContactUUID inside the JSON that comes back from the subsearch) into the contactUUID field of the outer search? I expect this value to be present in either the outer search or the subsearch, so how can I solve this?
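One possible sketch, appended to the end of the existing search: extract the ContactUUID out of the subsearch's JSON and coalesce it with the outer field. The regex assumes D_errorMessage always contains a "ContactUUID":"..." pair as in the sample row; if the outer contactUUID can be an empty string rather than null, it would need to be nulled out first for coalesce to take effect.

| rex field=D_errorMessage "\"ContactUUID\":\s*\"(?<D_contactUUID>[^\"]+)\""
| eval contactUUID=coalesce(contactUUID, D_contactUUID)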
Hi, I have an Excel file with 11,000 records and 5 columns (Username, Unique UserId, **, **, **). I need to prepare a report on user logins (Loginservice calls) for the past 15 days. I have to search in Splunk based on UserId and retrieve all events matching those UserIds. Can someone help me with an effective solution?
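One possible approach, sketched with assumed names: save the spreadsheet as a CSV lookup (here called users.csv with a UserId column) and drive the search from it with a subsearch. The index, sourcetype, and the field name the events use for the user ID (userId below) are all assumptions and need to be replaced with the real ones.

index=app_logs sourcetype=loginservice earliest=-15d
    [ | inputlookup users.csv | fields UserId | rename UserId AS userId ]
| stats count AS login_count latest(_time) AS last_login BY userId
| lookup users.csv UserId AS userId OUTPUT Username

Note that subsearches have a default result limit (10,000 by default), so with 11,000 rows it may be safer to search the Loginservice events without the subsearch and filter afterwards with the lookup (keeping only events where the lookup returns a Username).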
Dear All, I want to integrate the Imperva SecureSphere WAF with Splunk. I have installed the add-on on my search head, and it shows up in the /etc/apps directory on both my SH and HF. Now please let me know how to integrate the Imperva logs into Splunk.
I'm trying to build a search that will return an event and the severity of that event. I have the events, with wildcards for the parts that might change, and the severity in a lookup. Here's an example from my lookup:

Message,Severity
*kernel: nfs: server * OK,normal
*kernel: nfs: server * not responding* still trying,critical

If I run this search I get back the results I'd like, but I have no way of referencing them back to the lookup to grab Severity, because the Message doesn't match what's in the lookup due to the wildcards:

index=os source=/var/log/messages host=linuxserver1p
| rex field=_raw "(?<Message>.*)"
| search [inputlookup mylookup.csv | table Message]

Jul 28 02:15:40 linuxserverp kernel: nfs: server fixdist OK
Jul 28 01:30:37 linuxserver1p kernel: nfs: server fixdist not responding, still trying

How can I take these results back to my lookup and pull Severity out? Here is another search I've tried, where I have both the results I want and the values from the lookup, and I just need to join them together somehow, but as far as I can tell join won't work with wildcards:

| inputlookup mylookup.csv
| rename Message as msg
| append [search index=os source=/var/log/messages host=linuxserver1p | rex field=_raw "(?<Message>.*)" | search [inputlookup mylookup.csv | table Message]]
| table Message msg Severity
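A hedged sketch of one common approach: create a lookup definition on top of mylookup.csv (the definition name nfs_severity_lookup below is made up) and set its match type to WILDCARD(Message) in the lookup definition's advanced options. The lookup command can then match the wildcarded Message patterns directly:

index=os source=/var/log/messages host=linuxserver1p
| rex field=_raw "(?<Message>.*)"
| lookup nfs_severity_lookup Message OUTPUT Severity
| where isnotnull(Severity)

With a WILDCARD match type, the lookup's Message column is treated as a pattern rather than a literal string, so the extracted Message only has to match the pattern, not equal it exactly.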
I am following along with the Splunk docs to self-sign a cert for my Splunk UI. Everything is going fine until I get to the command:

/opt/splunk/bin/splunk cmd openssl x509 -req -in mySplunkWebCert.csr -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial -out mySplunkWebCert.pem -days 1095

I am getting "mySplunkWebCert.csr: No such file or directory". Unfortunately I'm not familiar enough with certs to troubleshoot this. Any help is appreciated.
Where do I find a new API key for the MITRE ATT&CK app for Splunk ES? The app is not working. The error I get is "Correct API key for MITRE attack".
Just want to confirm this behavior for a scripted input: the script I want to call actually runs perpetually... it calls an API, outputs the data, waits 120 seconds, calls the API again, wash rinse repeat. What I'm wondering: can I schedule this script, say every 5 minutes, and will the scheduler decide "script is still running... not starting again", so that if it stopped for whatever reason it simply gets started again? Or will it just keep starting multiple instances of the script even while one is still running?
Hello, I have a test environment and the SHC members aren't allocated the recommended resources (because it's test); however, I hadn't had any issues with the environment until recently. For whatever reason, the 3 SHC members in my test environment keep getting shut down because of signal 9 (the server itself is killing the splunk process). Signal 9 is a KILL signal from an external process; the server is running out of memory, and that's the cause of the kill. If I restart the SHC members the resources are freed, but the spiral starts over again. [screenshot showing the decline: something is steadily eating away at available memory] When I run the top command on the search heads and press "e" to change the unit, I can see that it's Splunk's mongod that's taking up most of the memory so far. I also get replication issues every now and again, where I have to resync.
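A hedged sketch for tracking which process is consuming the memory over time, using the introspection data the search heads already collect. The component and data.* field names below are how per-process resource usage is typically recorded in _introspection; treat them as assumptions and adjust to what your environment actually shows:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=mongod
| timechart span=10m max(data.mem_used) BY host

Comparing the mongod curve per host against the overall decline in free memory should show whether mongod (the KV store) is really the process driving the OOM kills.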
I am having issues finding a way to export two reports. I have two reports, which I'll call search1 and search2. Both searches were run, then sent to run in the background. According to the Jobs tab, both searches completed. The customer wanted this search run for "all time", so the results are quite large: search1 is 9.22GB and search2 is 4.97GB. The issue is getting access to the results. I've tried using | loadjob sid, and it just hangs and fails. I've tried exporting from the Jobs tab, and it fails. I can't use the API because, from what I can tell, you must put the password into the search, which then makes the password searchable for anyone with access to that log. I went to the $SPLUNK_HOME/var/run/splunk/dispatch folder and found both jobs. This link, https://docs.splunk.com/Documentation/Splunk/8.2.1/Troubleshooting/CommandlinetoolsforusewithSupport#toCsv, says to run "splunk cmd splunkd toCsv ./results.srs.gz". The .gz file appears to now be .zst, but I ran the command anyway. For search1, after a while it simply said "killed". For search2, as I'm writing this, it appears to be working: comma-delimited text is scrolling on the console, so I assume that once converted I will be able to export that one. So how do I export search1, and other large result sets in the future? The toCsv command was the last thing I found to try. Perhaps there is a setting in a .conf file I can modify and then run something else? Any assistance is appreciated.
I have a custom role which has limited capabilities, including:

rest_apps_view
rest_properties_get
search

The role needs to run the following search via the REST API and write the output to a text file on the originating server:

| inputlookup xxx.csv
| eval HASH=sha256(<FIELD B>+<FIELD C>)
| table <FIELD A>, HASH

I have created a user with the relevant role, and created a token for use in the curl request. If I run the above search in the UI it works fine; when I run the curl I get a FATAL response message - empty search. The curl I am using is:

curl -k -X GET -H "Authorization: Bearer <token>" https://mysearchead.com:8089/servicesNS/<user>/<app>/search/jobs/export -d search='<my search>' -d output_mode=csv > output.csv

So, my question is: which Splunk capabilities need to be enabled for my custom role to successfully make a REST API call to the search/jobs/export endpoint?
I have an index called myindex:

NAME   AGE  CITY     COUNTRY    LEGAL AGE
Denis  17   London   UK         NO
Denis  18                       YES
Maria  17   Rosario  Argentina  NO
Maria  18                       YES
Nani   11   Paris    France     NO

This is a basic example. The point is that when LEGAL AGE=NO there are several more fields available than when LEGAL AGE=YES. Notice that when LEGAL AGE=YES the fields CITY and COUNTRY don't exist at all. What I need to get are all the people in this index with all their information, EVEN if they are not of legal age. I use a join for this:

index=myindex "LEGAL AGE"=NO | join NAME [ search index=myindex "LEGAL AGE"=YES ]

The problem is that it only works if the subsearch returns something. In this example, it works for Denis and Maria, but not for Nani. How can I make it work even if the subsearch returns nothing?
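A hedged sketch of a join-free alternative: since both sets of events live in the same index and share NAME, stats can merge them without dropping people who only have LEGAL AGE=NO events. Field names are taken from the example above; values() produces multivalue fields where a person has both an underage and a legal-age event:

index=myindex
| stats values(AGE) AS AGE values(CITY) AS CITY values(COUNTRY) AS COUNTRY values("LEGAL AGE") AS LEGAL_AGE BY NAME

Unlike a join from an outer search filtered to LEGAL AGE=NO, this keeps Nani even though there is no matching LEGAL AGE=YES event.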
Hi: I am testing out the new dashboard options with Dashboard Studio, and I am a bit confused as to how a feature works. I want to use a base search 'index=nginx source="/var/log/nginx/access.log"'; I have that set up as a data source. I then want to chain that to multiple modifiers. To this end, I added a chain search '| stats count by status' linked to the parent search above, and I also created another chain search '| search splunk*' for some testing. If I create a dashboard panel graph (pie) and link it to the stats chain search, it says it can't find any data: 'Search ran successfully, but no results were returned'. If I click the magnifying glass from that panel, it returns results. If I have a table panel and use the splunk* chain search, it finds results. If I have a chained search that uses '| search site=splunk*', despite that field existing, it finds no results, but the magnifying glass does. Can auto-extracted fields not be used in this manner? The data in the source logs is all in the format <key>="value" for easy auto extraction of the fields. Thank you for any assistance/information you can provide.
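A hedged sketch of the usual workaround: as with classic base/post-process searches, a non-transforming base search may only pass along fields that are explicitly retained, so search-time auto-extracted fields such as status and site might never reach the chain searches. Explicitly keeping them in the base data source is one way to test this (field names are the ones mentioned in the question):

Base search (data source):
index=nginx source="/var/log/nginx/access.log"
| fields _time status site

Chain searches, unchanged:
| stats count by status
| search site=splunk*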
Searches are starting to take more time to execute and are then getting deferred at 9:10 am every day. The number of searches is the same throughout the day, and no extra searches are running around that time.
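A hedged sketch for narrowing down what the scheduler is doing around 9:10 am, based on the scheduler logs in _internal; the status field is how the scheduler normally records success, skipped, deferred, and continued runs:

index=_internal sourcetype=scheduler earliest=-7d
| timechart span=10m count BY status

A daily spike in deferred or skipped runs at 9:10 am, together with the reason field on those events, usually points at either a concurrency limit being hit or a long-running search holding scheduler slots at that time.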
I have a cluster consisting of some 6 or so indexers. I also have a search head cluster consisting of 3 SHs. In the web UI I'm getting:

The percentage of high priority searches delayed (76%) over the last 24 hours is very high and exceeded the red thresholds (10%) on this Splunk instance. Total Searches that were part of this percentage=55. Total delayed Searches=42
The percentage of non high priority searches delayed (77%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=440. Total delayed Searches=339

The users also report problems such as very slowly refreshing dashboards. But the Splunk components themselves do not seem to be stressed that much. The machines have 64G RAM each and 24 (indexers) or 32 (search heads) CPUs, but the load is at most around 10 on the SHs and 12 on the indexers. If I run vmstat I see the processors mostly idling, and about half of the memory on the search heads is unused (even counting cache as used memory). So something is definitely wrong, but I can't pinpoint the cause. What can I check? I do see that the search heads are writing heavily to disk, almost all the time. Maybe I should tweak some memory limits for the SHs to make them write to disk less? But which ones? Any hints? Of course, at first glance it looks as if I should raise the number of concurrent searches allowed, because the CPUs are idle, but if storage is the bottleneck that won't help much, since I'd just be hitting the streaming-to-disk problem with more concurrent searches.
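A hedged sketch for checking why scheduled searches are being delayed or skipped on these search heads, again from the scheduler logs; the reason field being populated for the non-successful runs is an assumption:

index=_internal sourcetype=scheduler status!=success earliest=-24h
| fillnull value="unknown" reason
| stats count BY status reason
| sort - count

The reason values should indicate whether the delays come from concurrency limits (which would argue for raising search limits) or from individual searches running long (which would point back at the disk I/O observation).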
Hi All, I have to show specific strings in my dashboard based on a metric value. Is it possible to show this in a metric value widget, for example if the metric value = 1 then show ABC, and if the metric value = 2 then show XYZ? Basically I need to add conditional output to the widget based on the metric value. The text or string to be shown is constant, so if the metric value = 1 it is always ABC that needs to be shown in the widget output. Regards, Gopikrishnan R.
Hi All, I have a requirement to store the DB agent custom metrics data in Analytics and apply ADQL to that data to query a specific output. Is this possible at all, and if yes, how? Regards, Gopikrishnan R.
Hi, we have one inputlookup file X1.csv and one index=x2, and we want to fetch alarm details from the index for the device names that are common with the inputlookup file. In the inputlookup file we have device name, Location, Category, and IP, and the same device names exist in index=x2. Could you please help with how we can fetch the alarm details, including the alarm time details, i.e. at which time we received an alarm for each device?

Thanks and Regards,
Nikhil Dubey
4nikhildubey@gmail.com
nikhil.dubey@visitor.upm.com
7897777118
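A hedged sketch, assuming the lookup column holding the device name is called device_name, that the events in index=x2 carry the same field, and that alarm_description is the field holding the alarm text; all three names are placeholders to replace with the real ones:

index=x2
    [ | inputlookup X1.csv | fields device_name ]
| stats count AS alarm_count earliest(_time) AS first_alarm latest(_time) AS last_alarm values(alarm_description) AS alarm_details BY device_name
| lookup X1.csv device_name OUTPUT Location Category IP
| convert ctime(first_alarm) ctime(last_alarm)

The subsearch restricts the index search to devices present in X1.csv, and the final lookup enriches each device with its Location, Category, and IP from the same file.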
Hi, please suggest how I can collect Windows event logs without the Splunk Universal Forwarder.
Hi all, I have a query using transaction:

source="abc_data1_*" index="testing" sourcetype="_json"
| transaction startswith=(STATUS="FAIL") endswith=(STATUS="SUCCESS")

The events in the results are considered from most recent to oldest, but I want this transaction to consider the older data first. I want the data to be sorted from the beginning and then have the transaction applied. "reverse" doesn't work with this. Does anyone know how to do this?
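A hedged sketch of a transaction-free alternative that processes the data oldest-first: sort ascending, start a new group at every FAIL, and aggregate each group with stats. This assumes each FAIL ... SUCCESS sequence should form one group; STATUS and the base search are taken from the question:

source="abc_data1_*" index="testing" sourcetype="_json"
| sort 0 + _time
| streamstats count(eval(STATUS="FAIL")) AS txn_id
| stats earliest(_time) AS start_time latest(_time) AS end_time range(_time) AS duration values(STATUS) AS statuses BY txn_id

sort 0 keeps all events (no 10,000-row truncation), and streamstats increments txn_id on every FAIL, so each FAIL and the events that follow it up to the next FAIL fall into the same group.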