All Posts



Try this.  Make a backup of $SPLUNK_HOME/etc/passwd.  Edit the original file and remove all users except admin and restart Splunk.
It is the admin account that I am trying to log in with, and the issue still persists.
You should be able to log in using the admin account.
There is nothing to fix. The warning (not an error) message is letting you know that connections are not as secure as they could be. They are still encrypted, however. If you choose, you can follow the instructions in the message to enable server name checking at connection time. That, however, requires that each server have a unique certificate containing the server's name.
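For reference, on recent Splunk versions server name checking is controlled by a setting along these lines in server.conf. Treat this as a sketch; the exact setting availability depends on your Splunk version, so check the docs linked from the warning message:

```
[sslConfig]
sslVerifyServerName = true
```

Each server then needs a certificate whose Common Name or Subject Alternative Name matches the name the connecting peer uses.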
My condition was wrong. Here is what I went with in the end:

```
<drilldown>
  <!-- Handle clicks on Tail (Legend) -->
  <condition match="$click.name2$>0 AND NOT $click.value2$>0">
    <eval token="form.Tail">if($click.name2$ == $form.Tail$, "*", $click.name2$)</eval>
  </condition>
  <!-- Handle clicks on Source (Chart) -->
  <condition match="$click.value2$>0">
    <link>
      <param name="target">_blank</param>
      <param name="search">index=myindex | search "WARNNING: "</param>
    </link>
  </condition>
</drilldown>
```
Hi @greenpebble, first of all, this approach doesn't make much sense, because the Incident Review page should be the main page of every SOC Analyst and should always be open; otherwise, you don't need ES! Anyway, if you also want an email, for every Correlation Search you enabled you can add sending an email as an additional action; this way the SOC Analysts will receive an email every time a Notable is written. You can do this by editing the Actions of every Correlation Search. Beware: this way you may be creating a spammer! Ciao. Giuseppe
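As a sketch of what enabling the email action looks like behind the UI, the correlation search stanza in savedsearches.conf ends up carrying settings like these (the stanza name and recipient address below are hypothetical placeholders):

```
[Threat - My Correlation Search - Rule]
action.email = 1
action.email.to = soc-team@example.com
action.email.subject = New notable triggered: $name$
```

In practice it is safer to set this through the Correlation Search editor's Adaptive Response Actions rather than editing the file directly.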
Here is my query:

```
index=ABC source=XYZ 'ABC00000000001'
| fillnull value="SENDING" type
| stats values(type) as types by display
```

Using the above query, I am getting wrong output:
1) The 'SENDING' type is coming with all events, even if there is no sending event for that ID.
2) For the IDs which have the 'sending' event 2 times in the logs, it should print twice in the output.
3) Sample log below; can we also get this time from the log event in the output?

[21.12.2024 00:33.37] [] [] [INFO ] [] - Updating DB record with displayId=ABC00000000001; type=RANSFER

ID         Type
ABC0000001 TRANSFER SENDING
ABC0000002 TRANSFER SENDING
ABC0000003 POSTING TRANSFER SENDING MESSAGES
ABC0000004 POSTING TRANSFER SENDING MESSAGES
ABC0000005 TRANSFER SENDING
The Splunk Add-on for Google Cloud Platform uses the httplib2 library. What worked for us was to set the HTTPLIB2_CA_CERTS environment variable in the Splunk systemd unit file and point it to the system CA bundle (in our case /etc/ssl/ca-bundle.pem). Have a look at 'lib/httplib2/certs.py' to understand the logic and alternative solutions.
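As a sketch, the environment variable can be added through a systemd drop-in rather than editing the unit file itself (the unit name Splunkd.service and the bundle path are from our setup and may differ on yours):

```
# /etc/systemd/system/Splunkd.service.d/ca-bundle.conf
[Service]
Environment="HTTPLIB2_CA_CERTS=/etc/ssl/ca-bundle.pem"
```

After creating the drop-in, run `systemctl daemon-reload` and restart Splunk so the variable is picked up by the add-on's modular inputs.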
I'm trying to do a condition-based action on my chart. I want to create a situation where, when a legend entry is clicked, form.Tail will change to take click.name2 (which is a number like 120, 144, 931 etc.), and when a column inside the chart is clicked, a custom search will be opened (in a new window if possible; if not, the same window will be just fine), based on checking whether click.name is a number (and it shouldn't be, as it should be the name of the source /mnt/support_engineering... ).

This is my current chart:

```
<chart>
  <title>amount of warning per file per Tail</title>
  <search base="basesearch">
    <query>|search | search "WARNNING: " | rex field=_raw "WARNNING: (?&lt;warnning&gt;(?s:.*?))(?=\n\d{5}|$)" | search warnning IN $warning_type$ | search $project$ | search $platform$ | chart count over source by Tail</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.showDataLabels">all</option>
  <option name="charting.chart.stackMode">stacked</option>
  <option name="charting.legend.labelStyle.overflowMode">ellipsisEnd</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <eval token="form.Tail">if($click.name2$=$form.Tail$, "*", $click.name2$)</eval>
  </drilldown>
</chart>
```

The main problem is that whenever I even try a condition inside the drilldown, a new search is opened instead of managing tokens, no matter what the condition is or what I'm doing inside.
This is what I've tried so far:

```
<drilldown>
  <!-- Handle clicks on Tail (Legend) -->
  <condition match="tonumber($click.name$) != $click.name$">
    <eval token="form.Tail">if($click.name2$ == $form.Tail$, "*", $click.name2$)</eval>
  </condition>
  <!-- Handle clicks on Source (Chart) -->
  <condition match="tonumber($click.name$) == $click.name$">
    <link>
      <param name="target">_blank</param>
      <param name="search">index=myindex | search "WARNNING: "</param>
    </link>
  </condition>
</drilldown>
```

click.name should be the name of the source, as those are the columns of my chart.

Thanks in advance to helpers!
I am getting the following error message whenever I try to log in to my Splunk test environment: user=************** is unavailable because the maximum user count for this license has been exceeded. I think this is because of a new license I recently uploaded to this box. As the old license was due to expire, I recently got a new free Splunk license (10GB Splunk Developer License). I received and uploaded it to the test box on Friday, 3 days before the old one was due to expire. I then deleted the old license that day, despite it having a few additional days. On Sunday (the day the old license was due to expire), I started getting this login issue. As I can't get past the login screen, I can't try to re-upload a different license, etc. Any suggestions?
Hi there, I'm looking to set up an automated email that will trigger any time a new alert comes into Incident Review in Splunk ES (using Splunk Enterprise). The idea is for the team to be notified without having the Incident Review page open, to improve response time. I know I can set up emails individually when an alert triggers, but this would be for every 'new' alert that comes in (there are some alerts that are auto-closing), or with an option to only target high-urgency alerts, based on volume. Any advice would be appreciated!
Thanks! That is understandable. Based on your answers so far, I will think through what would work best, and will get back to you. But either way, I think I got all the answers I needed.
As I said before - while fiddling with multiple different intermediate syslog solutions can work if syslog is the native form of sending events, mixing it with Windows usually ends badly one way or another. You either get your data broken and non-parseable, or you have to bend over backwards to force it to work at least half-decently. We don't know your whole environment or the original problem you're trying to solve, so we can't help much here. There is, of course, the additional question of whether you need the Splunk metadata at all if you're forwarding to a third-party syslog receiver anyway.
First, map is usually not the solution to the problem you are trying to solve. Secondly, could you explain the relationship between the values "a", "b" and the searches "index=_internal | head 1 | eval val=\"Query for a1\" | table val" and "index=_internal | head 1 | eval val=\"Query for b\" | table val"? Confusingly, every one of the three searches will result in a predetermined string value of a single field. Why bother with index=_internal? If you are just trying to make a point about map, you can compose them with makeresults just as easily.

If you really want to use map, study the syntax and examples in map. The whole idea of map is to NOT use the case function. To produce the result you intended, here is a proper construct:

```
| makeresults
| eval name1 = mvappend("c", "b", "a")
| mvexpand name1
| map search="search index=_internal | head 1 | eval val=if(\"$name1$\" IN (\"a\", \"b\"), \"Query for $name1$\", \"Default query\") | table val"
```

This is the output no matter what data you have in _internal:

val
Default query
Query for b
Query for a

However, there are often much easier and better ways to do this. To illustrate, forget val="Query for a". Let's pick more realistic mock values "info", "warn". This is a construct using map:

```
| makeresults
| eval searchterm = mvappend("info", "warn", "nosuchterm")
| mvexpand searchterm
| map search="search index=_internal log_level=\"$searchterm$\" | stats count by log_level | eval val=if(\"$searchterm$\" IN (\"info\", \"warn\"), \"Query for $searchterm$\", \"Default query\")"
```

If you examine _internal events, you will know that, even though searchterm is given three values, the above should only give two rows, like:

log_level  count   val
INFO       500931  Query for info
WARN       17262   Query for warn

However, the syntax of map makes the search much harder to maintain. Here is an alternative using a subsearch. (There are other alternatives based on actual search terms and data characteristics.)
```
index=_internal
    [makeresults
    | eval searchterm = mvappend("info", "warn", "nosuchterm")
    | fields searchterm
    | rename searchterm as log_level]
| stats count by log_level
| eval val = if(log_level IN ("INFO", "WARN"), "Query for " . log_level, "Default query")
```

If you apply this to the exact same time interval, it will give you exactly the same output. Hope this helps.
I set the following in inputs.conf, and it seems to be working fine now:

```
multiline_event_extra_waittime = true
time_before_close = 120
```

I will monitor it for a while and see if the successful event breaking is stable. Thank you for your help!
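For anyone landing here later, these settings belong inside the monitor stanza on the forwarder. A minimal sketch, in which the monitored path and sourcetype name are placeholders for your own:

```
[monitor:///var/log/myapp/app.log]
sourcetype = my_multiline_sourcetype
multiline_event_extra_waittime = true
time_before_close = 120
```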
Thanks for clarifying. You helped a lot! That means there are two options for me:
- do this conversion on the syslog-ng side, and that won't hurt the Splunk side of things
- forward the logs to yet another Splunk instance that will only do this conversion, thereby isolating the "production" Splunk instance from these transforms
Hi @manhdt, Splunk ITSI is a Premium App, so you cannot download it from Splunkbase. You can have it only if you paid for a license (in which case you'll receive an email with the link to download it), or if you are a Splunk Partner, using your Not For Resale license. If you want to see it, you can access Splunk Show, if you are enabled (a Sales Engineer 2 certification is required), or watch a video on the Splunk YouTube channel. Ciao. Giuseppe
It depends on your overall process but as a general rule, the pipeline works like this: input -> transforms -> output(s) So if you modify an event and its metadata it will get to outputs that way. There is an ugly way to avoid it - use CLONE_SOURCETYPE to make a copy of your event and process it independently but it's both a performance hit and a maintenance nightmare in the future.
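For completeness, a minimal sketch of the CLONE_SOURCETYPE approach mentioned above (both sourcetype names are placeholders); the cloned events re-enter the pipeline under the new sourcetype and can then be routed separately, e.g. with _TCP_ROUTING:

```
# props.conf
[my_sourcetype]
TRANSFORMS-clone = clone_for_syslog

# transforms.conf
[clone_for_syslog]
REGEX = .
CLONE_SOURCETYPE = my_sourcetype_syslog_copy
```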
How to download add-on "Splunk IT Service Intelligence" I'm receiving an error, as seen in the picture below.
Do you mean that separate lines are written with 10 minute intervals or every 10 minutes a whole multiline event is written? Anyway, if it's a UF it might help to add EVENT_BREAKER_ENABLE=true and set EVENT_BREAKER to the same value as LINE_BREAKER.
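A sketch of what that looks like in props.conf on the forwarder, under the hypothetical assumption that each event starts with a date like 2024-12-21 and that LINE_BREAKER already uses the same pattern:

```
[my_multiline_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
```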