All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, we are using the Microsoft Cloud Services Add-on for Splunk to ingest logs from Azure Storage Table and Azure Storage Blob, and we have already ingested data from both into Splunk successfully. My question is about cost: once the logs are ingested, Splunk tracks how many GB we ingest per day and counts that against our daily Splunk license. But since we are pulling the logs out of Azure Storage Table and Blob, is there any cost on the Azure side when we pull the logs from Azure into Splunk? If so, roughly how much does Azure charge for this, and if there are any useful links on the topic, kindly share them as well.
Hi, I'm a new Splunk user and I'm asking for your help. I would like to create a simple dashboard with VPN data. My search:

index="fw_paloalto" (sourcetype="pan:globalprotect" log_subtype="connected") OR (sourcetype="pan:system" log_subtype=auth signature="auth-fail")

From that data, I would like to plot the following values in a single timechart with a 1-day span:

> dc(user) where log_subtype="connected" and host="PA-3020*"
> dc(user) where log_subtype="connected" and host="PA-820*"
> count(user) where signature="auth-fail" and host="PA-3020*"
> count(user) where signature="auth-fail" and host="PA-820*"

For the moment I am not able to display these values in the same chart; I'm forced to have one chart per host. I hope this is clear enough. Thanks a lot for your help, Dimitri
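One possible approach (an untested sketch based only on the field names in the post): use eval-based aggregations inside a single timechart, so distinct counts and plain counts can coexist as separate series in one chart:

```
index="fw_paloalto" (sourcetype="pan:globalprotect" log_subtype="connected") OR (sourcetype="pan:system" log_subtype=auth signature="auth-fail")
| timechart span=1d
    dc(eval(if(log_subtype="connected" AND like(host,"PA-3020%"), user, null()))) as connected_PA3020
    dc(eval(if(log_subtype="connected" AND like(host,"PA-820%"), user, null()))) as connected_PA820
    count(eval(signature="auth-fail" AND like(host,"PA-3020%"))) as authfail_PA3020
    count(eval(signature="auth-fail" AND like(host,"PA-820%"))) as authfail_PA820
```

Each aggregation only considers events matching its eval condition, which avoids needing one chart per host.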
Hello, in my dashboard I use the "stats count" function to show the number of results for a particular search. Now I want to divide this number by 3 and display the result instead. How do I do this? Thank you for your help!
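A minimal sketch of one way to do this (the index and search terms are placeholders; rounding to two decimals is an assumption):

```
index=your_index your_search_terms
| stats count
| eval result=round(count/3, 2)
| fields result
```

The eval runs after stats, so the single-value panel can display `result` instead of `count`.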
Hello Community, can you please advise me: where in the configuration can I find out which SMTP mail server my Splunk uses to send notifications to employees? My deployment uses a Search Head Cluster. I'm trying to find the configuration file that lists my company's SMTP server, through which Splunk sends alerts within our domain. I went to one of the search heads and looked at Settings > Server settings > Email settings, but no settings are listed there, even though this search head is sending alerts. Thank you for your feedback.
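For reference, the SMTP server for alert emails normally lives in alert_actions.conf under the [email] stanza (the values below are placeholders). In a Search Head Cluster it may sit in an app pushed by the deployer rather than in system/local, which could explain why the UI page looks empty:

```
# Find the effective setting and the file it comes from:
#   splunk btool alert_actions list email --debug
[email]
mailserver = smtp.example.com:25
from = splunk@example.com
```

The btool output shows exactly which .conf file supplies each value.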
Hi, I recently installed the Elastic Data Integrator app to migrate data from an ELK server to Splunk. After adding the input and enabling the modular input, no data is received in Splunk. Is there anything additional that needs to be done while installing the modular input, or in the input options? My project needs to pull raw data from ELK into Splunk. I am also new to Splunk, so any help will be much appreciated. I am also attaching screenshots depicting my problem. Thank you, Hari
Hi all, I am calculating a value from my data and I want to plot it in a timechart.

| where status!="ABORTED"
| streamstats count as start reset_on_change=true by status URL
| where start=1
| streamstats count(eval(status=="FAILURE")) as fails by status URL
| eval fails=if(fails=0,null(),fails)
| filldown fails
| stats list(*) as * by fails URL
| where mvcount(status) = 2
| eval stime=mvindex(TIME, 0)
| eval etime=mvindex(TIME,-1)
| eval diff=(etime - stime)/3600/1000
| timechart span=1mon avg(diff) as MTTR by URL
| eval MTTR = round(MTTR,2)

I tried to plot a timechart like this, but it is not working and gives "No results found". Does anything else need to be done to plot a calculated value in a timechart?
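One likely cause (an assumption based on the pipeline shown): timechart needs a _time field to bucket on, but the stats list(*) step drops _time. A sketch of a fix is to rebuild _time from the computed end time just before calling timechart (this assumes TIME holds epoch milliseconds, as the /1000 in the diff suggests):

```
...
| eval diff=(etime - stime)/3600/1000
| eval _time=etime/1000
| timechart span=1mon avg(diff) as MTTR by URL
| eval MTTR=round(MTTR,2)
```

With _time restored as epoch seconds, timechart has something to span over.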
I whitelisted some hosts on the deployment server, but the apps are not being deployed to those servers. So I checked the internal splunkd logs and I am seeing the error below:

-0400 ERROR HttpListener - Handler for /services/streams/deployment?name=default:fli_events_prod_12963_d209_corp_dist:fli_events_prod_12963_d209_corp_dist sent a 300 byte response after earlier claiming a Content-Length of 10240!

Please help me figure out how to fix this.
Hi All, we use an Azure certificate for single sign-on to our application. It was working fine until we renewed the certificate because its validity had expired. Everything else is the same as before; we have not made any changes in any conf files. The path of the IdP cert is \etc\auth\idpCerts, and the authentication file is in \etc\system\local:

[saml]
idpCertPath = idpCert.pem

I don't understand what is causing this issue. Can anybody please help me with this?
Splunk memory usage is over 95%, and the process using the most is named [Splunk server], at over 60%. The high memory usage has persisted since Aug 29th, and it seems to have caused the Splunk server to go down. After I restarted it, memory usage quickly climbed back to 95%. After reading the troubleshooting doc below, I think it is a splunkd problem, so I need your help to solve it. URL: https://docs.splunk.com/Documentation/Splunk/6.5.7/Troubleshooting/Troubleshootmemoryusage Thanks a lot. Country/Location: China/Mainland. Version: Splunk Enterprise Server 6.6.7
Hi, I'm trying to change the color of a line chart with: <option name="charting.seriesColors">[000000FF]</option> but the color remains the default red.
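For what it's worth, charting.seriesColors normally expects a list of hex color values written with a 0x prefix, so a sketch of the corrected option might be:

```xml
<option name="charting.seriesColors">[0x000000]</option>
```

With multiple series, a comma-separated list such as [0x000000,0xFF0000] assigns one color per series in order.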
I want to parse local log files and add the date to the body of the POST request, but I'm not exactly certain what is the best date format to use. Can someone please provide some example options? Thank you, Mark

$params = @{
    Uri = 'https://prd-p.splunkcloud.com:8088/services/collector'
    Method = 'POST'
    Headers = @{ Authorization = 'Splunk 2caf8cde' }
    Body = @{
        index = 'job1'
        sourcetype = '_json'
        event = @{
            name1 = "value1"
            name2 = "value2"
            array1 = @(
                "value1"
                "value2"
            )
        }
    } | ConvertTo-Json
}
Invoke-RestMethod -SkipCertificateCheck @params
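For HEC specifically, the collector endpoint accepts an optional top-level time field in epoch seconds (with fractional seconds allowed), which is the format HEC parses natively. A sketch of the JSON payload (all values are placeholders):

```json
{
  "time": 1693303200,
  "index": "job1",
  "sourcetype": "_json",
  "event": { "name1": "value1" }
}
```

If no time field is sent, Splunk assigns the event the time it was received, so sending epoch seconds from the log line is the usual way to preserve the original timestamp.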
Hello Splunkers, below is a screenshot in which I have created one hidden panel, where level_department is the token I set. The token $level1$ referred to there brings in the department-level information, and that all works fine. In the panel below, when I refer to $level1_department$ (the token from the panel above) after $cat$, it is not working. Can someone help me understand what the issue is and what I need to correct to refer to the token $level1_department$ in the query below?
Hi Everyone, I am desperately seeking help with my new Splunk query. The search result looks like the below:

"pluginid","alertRef","alert","name","riskcode","confidence","riskdesc","confidencedesc","desc","instances","count","solution","otherinfo","reference","cweid","wascid","sourceid" "100001","100001","Unexpected Content-Type was returned","Unexpected Content-Type was returned","1","3","Low (High)","High","<p>A Content-Type of text/html was returned by the server.</p><p>This is not one of the types expected to be returned by an API.</p><p>Raised by the 'Alert on Unexpected Content Types' script</p>","System.Xml.XmlElement","933","","","","-1","-1","20420" "100000","100000","A Client Error response code was returned by the server","A Client Error response code was returned by the server","0","3","Informational (High)","High","<p>A response code of 401 was returned by the server.</p><p>This may indicate that the application is failing to handle unexpected input correctly.</p><p>Raised by the 'Alert on HTTP Response Code Error' script</p>","System.Xml.XmlElement","2831","","","","388","20","70"

My aim is to have a table in Splunk that categorizes each value into its own field. For example:

pluginid alertRef alert
100001 100001 Unexpected Content-Type was returned
100000 100000 A Client Error response code was returned by the server

So my regex should be able to read every line inside the CSV search result. My current solution is not capable of that (it only reads a single line, not multiple lines), as you can see below (I skipped the column-name row):

^"\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+","\w+"\s+"(?P<plugin_id>\d+)","(?P<alert_ref>\d+)

Please help me get the regex to read all the lines in my CSV search result.
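A sketch of one approach (untested; it assumes the whole CSV arrives as a single event): use rex with max_match=0 plus the multiline flag (?m), so the ^ anchor matches at the start of every line and each data row contributes one value to each capture group:

```
| rex max_match=0 field=_raw "(?m)^\"(?<pluginid>\d+)\",\"(?<alertRef>\d+)\",\"(?<alert>[^\"]*)\""
| table pluginid alertRef alert
```

This yields multivalue fields with one entry per CSV row; the header row is skipped automatically because its first column is not numeric.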
I have a very noisy app log. I want to use Splunk's indexer to filter only relevant data and index them. Basically I need to match a string 'Error', only forward the matched line and the line preceding that one for indexing. In other words, I need to do a grep and a grep -B1 for the string Error. Then, I only want to index those events using Splunk's indexer filtering. How do I do that?   Example: I have this log data INFO: Task1 INFO: OK INFO: Task 2 ERROR: exception xyz   Here, I only want to capture and index this: INFO: Task 2 ERROR: exception xyz
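For the index-time filtering part, the usual pattern (a sketch; the sourcetype and stanza names are placeholders) is a pair of transforms that first route everything to the nullQueue and then route matching events back to the indexQueue. Note that keeping the preceding line too requires both lines to be part of the same event, e.g. via multi-line event breaking in props.conf:

```
# props.conf
[my_noisy_sourcetype]
TRANSFORMS-filter = setnull, setkeep

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setkeep]
REGEX = ERROR
DEST_KEY = queue
FORMAT = indexQueue
```

Transforms run in order, so events matching ERROR are pulled back out of the nullQueue; everything else is discarded before indexing.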
Hi all! I have been absolutely stumped by this and hoping you can help me out. I am trying to find users that have 2 different, distinct events that happen on the same day. One event can occur at any time of the day, and the second event occurs between 6-8 am. The closest I have gotten is:

index=Info source=Trustme (EventCode=X OR EventCode=Y)
| eval hour=tonumber(strftime(_time,"%H"))
| where hour>=8 OR hour<0
| stats values(EventCode) as Event_Codes by User
| search Event_Codes=X Event_Codes=Y

This is clipping out users who have Event Y occur outside of that range, which I would like to avoid. Also, I want to cast this over a large period to test and make sure I'm capturing the right people, then I can hopefully set it up as an alert. Any help would be greatly appreciated!
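A sketch of one way to express "both event types on the same calendar day, with one of them restricted to 06:00-08:00" (this assumes Y is the time-restricted event; swap X and Y if it is the other way around):

```
index=Info source=Trustme (EventCode=X OR EventCode=Y)
| eval day=strftime(_time, "%Y-%m-%d")
| eval hour=tonumber(strftime(_time, "%H"))
| stats count(eval(EventCode="X")) as x_count
        count(eval(EventCode="Y" AND hour>=6 AND hour<8)) as y_count
        by User day
| where x_count > 0 AND y_count > 0
```

Grouping by User and day keeps every X event regardless of time, while only the time-windowed Y events count toward y_count.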
Hi, on a Linux server, a UF is configured to monitor a log directory, and it stops sending data to the indexer after about 2 minutes. When I restart the UF from the deployment server, it starts sending data and then stops again. Other input configurations, like running scripts, are working fine, and there is no error or warning in the _internal index about this host. Do you have any idea about this problem?
Hi all - I am trying to exclude matching results from a lookup and can't get it to work. I've tried multiple searches, tried what I've found in Splunk Answers, and I just can't get this to work. Here's what I have right now:   | inputlookup myinputlookup1 | search NOT [ |lookup my_lookup InLookField AS LookField OUTPUT InLookField]   This search runs but produces no results. What am I doing wrong? 
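One sketch that often works for this pattern (field and lookup names taken from the post): make the subsearch return the field under the same name as the outer data, using inputlookup inside the subsearch rather than the lookup command:

```
| inputlookup myinputlookup1
| search NOT [
    | inputlookup my_lookup
    | rename InLookField AS LookField
    | fields LookField
  ]
```

The subsearch expands to NOT (LookField=value1 OR LookField=value2 ...), so its output field must match the outer field name for the exclusion to apply.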
Hi, We are using both Splunk Cloud and Splunk Enterprise. We recently came across some issues/differences in search we originally thought were due to indexed field issues but turned out to be more about some basic difference in how each environment converts a search into lispy (at least that is what we observe). For example in Splunk Cloud 8.2.2203.4 the following search:   index=_internal some_field=some-value   Results in the following lispy:   [ AND index::_internal [ OR some_field::some-value [ AND some value ] ] ]     For our Splunk Enterprise 8.2.6 the same search results in the following lispy:   [ AND index::_internal some value ]     In our case `some_field` is an index field added on by our HEC requests. This results in very incorrect searches in enterprise and inefficient searches in cloud. We do now realize we can just directly query for "some_field::some_value" but we would like to understand this behavior difference and if it is configurable.   Thanks
Hello All, I am relatively new to Splunk and I am trying to search using sets. By "sets" I mean a group of values that I import into Splunk and then use to search the logs from a data source for values that match any value in the set, something like a reference set in QRadar. The use case I am trying to implement is an alert for blacklisted applications. I have a .csv file that contains two columns, application name and application category. I want to import this data into Splunk and then use the values in the application name column to search against the processName field of the logs from the endpoint security solution. How do I achieve this in Splunk? I have read through the documentation for lookups, but I did not understand how it would help me achieve my objective.
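A sketch of the lookup-based approach (the lookup file, column, and index names are placeholders based on the post): upload the CSV as a lookup table file, then use it as a subsearch filter so only events whose processName appears in the list survive:

```
index=endpoint_security
    [| inputlookup blacklisted_apps.csv
     | rename application_name AS processName
     | fields processName ]
```

The subsearch expands into (processName=app1 OR processName=app2 ...), which is effectively a reference-set match; saving this search as an alert covers the blacklisted-application use case.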
Is it possible to extract a field across multiple indexes and multiple sourcetypes?
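Yes, at search time this is straightforward: an inline rex (or a search-time EXTRACT defined per sourcetype in props.conf) applies to whatever events the base search returns, regardless of index. A hedged sketch with placeholder index, sourcetype, and pattern names:

```
(index=idx_a OR index=idx_b) (sourcetype=st_one OR sourcetype=st_two)
| rex field=_raw "user=(?<user>\S+)"
| stats count by user
```

For a permanent extraction across several sourcetypes, the same EXTRACT-... regex can be repeated under each sourcetype stanza in props.conf, since props.conf stanzas are keyed by sourcetype (or source/host), not by index.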