All Posts


Have you already solved this issue? I also want to do the same, but I encountered the following problem:
```
Active forwards:
    None
Configured but inactive forwards:
    mysubdomain:443
```
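For anyone hitting the same state: that output comes from the forwarder CLI, and "inactive" usually means the TCP connection to the receiver is failing. A quick re-check after changing outputs.conf might look like this (host and port taken from the output above):
```
# Show forwarding status (prints the Active/Configured-but-inactive lists)
$SPLUNK_HOME/bin/splunk list forward-server

# Basic reachability test toward the receiver
nc -vz mysubdomain 443
```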
Hello, Can someone please provide the eksctl command line, or a command line in combination with a cluster config file, that will produce an EKS cluster (control plane and worker node(s)) resourced to allow the installation of the splunk-operator and the creation of a standalone Splunk Enterprise instance? Thanks, Mark
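A minimal sketch of such a command, not a validated answer -- the cluster name, region, node count, and instance size are all assumptions, so check them against the splunk-operator's documented resource requirements:
```
# m5.2xlarge = 8 vCPU / 32 GiB per node (an assumption, not a
# splunk-operator recommendation); adjust before relying on this.
eksctl create cluster \
  --name splunk-op-test \
  --region us-east-1 \
  --nodes 2 \
  --node-type m5.2xlarge
```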
Hi @greenpebble ! Hmm, the 10 GB dev license usually has all of Splunk's functionality enabled, so it's odd that you're seeing that message (I know the 50 GB license has some limitations). If you can't log in at all, there are a couple of things you can try: either update the license from the CLI, or temporarily remove all other users from the instance. This Splunk doc has information about adding a license from the CLI: https://docs.splunk.com/Documentation/Splunk/latest/Admin/LicenserCLIcommands

Once you update the license, restart Splunk and try logging in again. If you are still having issues, try this to temporarily remove all other users from the instance:
1. Stop Splunk.
2. Go to `$SPLUNK_HOME/etc/passwd` and make a backup of this file (e.g. `cp passwd passwd.bak`).
3. Edit the `passwd` file and remove all users except for the `admin` user. There should only be 1 line in the file when you are done.
4. Restart Splunk.

This should let you log in as the `admin` user, since the other users are removed. Once you log in and fix the license, you can restore the `passwd` file from the backup to add the users back. Hope this helps!
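For reference, the whole sequence from the CLI might look something like this (the license file path is a placeholder):
```
# Add the new license from the CLI (path is a placeholder)
$SPLUNK_HOME/bin/splunk add licenses /tmp/Splunk_Developer.lic
$SPLUNK_HOME/bin/splunk restart

# Fallback: trim the passwd file so only admin remains
$SPLUNK_HOME/bin/splunk stop
cp $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak
# ...edit $SPLUNK_HOME/etc/passwd so only the admin line remains...
$SPLUNK_HOME/bin/splunk start
```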
Hi @juhiacc  You can do some snooping around in the `_internal` index to see if you can figure out where the data is coming from. I'm not sure what sourcetype UberAgent uses, but if we assume it's `uberagent`, you can run the following search to get some more info about the origin of the data (just replace `uberagent` with the correct sourcetype):
```
index=_internal sourcetype=splunkd component=Metrics group=per_sourcetype_thruput series="uberagent"
```
In the results, you should be able to see all of the hosts that have processed data for this sourcetype. Depending on your environment, you may see multiple hosts in the `host` field, but you should be able to determine which hosts are intermediate steps (like a Heavy Forwarder or Indexer) and which hosts are the original source.

From there, you can investigate each host's `inputs.conf` to see if there are any hints as to where the data is coming from. Sometimes the `source` field of the data might also indicate where it's coming from. For example, if the `source` is a file path, it's almost certainly coming from a file monitor input. But it looks like you may have already checked this.

There is also a chance that it was data indexed in the past with future timestamps, but since you mentioned that you deleted the index, this is unlikely to be the case. New data would need to be indexed now for it to start appearing in the `main` index.

If none of that helps, let me know and we can try some other things. Good luck!
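If that search returns a lot of events, an aggregated view along these lines might make the per-host picture easier to read (the `uberagent` sourcetype is still an assumption):
```
index=_internal sourcetype=splunkd component=Metrics group=per_sourcetype_thruput series="uberagent"
| stats sum(kb) as total_kb latest(_time) as last_seen by host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort - total_kb
```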
Hi  We had UberAgent apps installed in our Splunk environment, and recently we deleted the apps along with the index. We see that, since the index deletion, data is getting into the main index from a few servers/devices. But we're not sure where this data is coming from, since we have removed the UberAgent apps from everywhere. Any suggestions on where we should be looking to find the source? There are no related HEC tokens or scripts to be found. Warm Regards!
I am running into an issue where I am attempting to import data from a SQL Server database. One of the columns, entitled message, contains a message payload with the character '{' in it. When Splunk processes the data from DB Connect, it inappropriately truncates the message when it sees the '{' bracket in the document. Are there solutions for overriding this line-breaking behavior? We currently have to go into the raw data and extract the information using regex to preserve it, and we would rather store this message as a Splunk key-value pair.
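A hedged sketch of the usual fix: override event breaking in props.conf for the sourcetype DB Connect assigns (the sourcetype name below is a placeholder), so events break only on newlines instead of guessed boundaries, and truncation is disabled:
```
# props.conf -- sourcetype name is a placeholder
[mssql:messages]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 0
```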
From what you have shared (which is all I can go on), are you saying that the events which have been marked as "SENDING" in the type are not actually "Sending" messages? If so, presumably they also don't have a type field? Please can you share accurate but anonymised examples of all the event types you are trying to process, because doing it piecemeal is not very productive.
Try this.  Make a backup of $SPLUNK_HOME/etc/passwd.  Edit the original file and remove all users except admin and restart Splunk.
It is the admin account that I am trying to log in with, and the issue still persists.
You should be able to log in using the admin account.
There is nothing to fix.  The warning (not an error) message is letting you know that connections are not as secure as they could be.  They are still encrypted, however. If you choose, you can follow the instructions in the message to enable server name checking at connection time.  That, however, requires that each server have a unique certificate containing the server's name.
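If you do decide to enable it, the relevant switches live in server.conf; a sketch, assuming a reasonably recent Splunk version (verify the setting names against your version's server.conf spec before applying):
```
# server.conf -- verify these settings against your version's spec file
[sslConfig]
sslVerifyServerCert = true
sslVerifyServerName = true
```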
My condition was wrong; here is what I went with in the end:
```
<drilldown>
  <!-- Handle clicks on Tail (Legend) -->
  <condition match="$click.name2$>0 AND NOT $click.value2$>0">
    <eval token="form.Tail">if($click.name2$ == $form.Tail$, "*", $click.name2$)</eval>
  </condition>
  <!-- Handle clicks on Source (Chart) -->
  <condition match="$click.value2$>0">
    <link>
      <param name="target">_blank</param>
      <param name="search">index=myindex | search "WARNNING: "</param>
    </link>
  </condition>
</drilldown>
```
Hi @greenpebble , first of all, there's not much sense in this approach, because the Incident Review page should be the main page of every SOC Analyst and should always be open; otherwise, you don't need ES!

Anyway, if you also want an email: for every Correlation Search you have enabled, you can add sending an email as an additional action, so the SOC Analysts will receive an email every time a Notable is written. You can do this by editing the Actions of every Correlation Search.

Beware, because this way you may be creating a spammer!

Ciao.

Giuseppe
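As a sketch, the equivalent settings in a correlation search's savedsearches.conf stanza might look like this (the stanza name and recipient are placeholders):
```
# savedsearches.conf -- stanza name and recipient are placeholders
[Threat - Example Correlation Search - Rule]
action.email = 1
action.email.to = soc-team@example.com
action.email.subject = New notable triggered: $name$
```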
```
index=ABC source=XYZ 'ABC00000000001' | fillnull value="SENDING" type | stats values(type) as types by display
```
Using the above query, I am getting the wrong output:
1) The 'SENDING' type is coming with all events, even if there is no sending event for that ID.
2) For the IDs which have a 'sending' event 2 times in the logs, it should be printed twice in the output.
3) Sample log below; can we also get this time from the log event in the output?

[21.12.2024 00:33.37] [] [] [INFO ] [] - Updating DB record with displayId=ABC00000000001; type=RANSFER

ID          Type
ABC0000001  TRANSFER SENDING
ABC0000002  TRANSFER SENDING
ABC0000003  POSTING TRANSFER SENDING MESSAGES
ABC0000004  POSTING TRANSFER SENDING MESSAGES
ABC0000005  TRANSFER SENDING
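Not a definitive answer (the reply above rightly asks for fuller samples), but one direction to sketch: extract displayId and type per event (the rex below is based only on the single sample line shown), quote the ID with double quotes (single quotes in SPL reference field names), and use list() instead of values() so duplicates and event times are preserved:
```
index=ABC source=XYZ "ABC00000000001"
| rex "displayId=(?<display>\w+); type=(?<type>\w+)"
| fillnull value="SENDING" type
| eval event_time=strftime(_time, "%d.%m.%Y %H:%M:%S")
| stats list(type) as types, list(event_time) as times by display
```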
The Splunk Add-on for Google Cloud Platform is using the httplib2 library.  What worked for us was to set the HTTPLIB2_CA_CERTS environment variable in the Splunk systemd unit file and point it to the system CA bundle (in our case /etc/ssl/ca-bundle.pem).  Have a look at 'lib/httplib2/certs.py' to understand the logic and alternative solutions.   
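For illustration, as a systemd drop-in (the Splunkd.service unit name and the bundle path are assumptions that vary by install and distribution), followed by a daemon-reload and a Splunk restart:
```
# /etc/systemd/system/Splunkd.service.d/httplib2-ca.conf
[Service]
Environment="HTTPLIB2_CA_CERTS=/etc/ssl/ca-bundle.pem"
```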
I'm trying to do a condition-based action on my chart. I want to create a situation where, when a legend is clicked, form.Tail will change to take click.name2 (which is a number like 120, 144, 931 etc.), and when a column inside the chart is clicked, a custom search will be opened (in a new window if possible; if not, the same window will be just fine), based on checking if click.name is a number (and it should be, as it should be the name of the source /mnt/support_engineering...).

This is my current chart:
```
<chart>
  <title>amount of warning per file per Tail</title>
  <search base="basesearch">
    <query>|search | search "WARNNING: " | rex field=_raw "WARNNING: (?&lt;warnning&gt;(?s:.*?))(?=\n\d{5}|$)" | search warnning IN $warning_type$ | search $project$ | search $platform$ | chart count over source by Tail</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.showDataLabels">all</option>
  <option name="charting.chart.stackMode">stacked</option>
  <option name="charting.legend.labelStyle.overflowMode">ellipsisEnd</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <eval token="form.Tail">if($click.name2$=$form.Tail$, "*", $click.name2$)</eval>
  </drilldown>
</chart>
```
The main problem is that whenever I even try a condition inside the drilldown, a new search is opened instead of managing tokens, no matter what the condition is or what I'm doing inside. This is what I've tried so far:
```
<drilldown>
  <!-- Handle clicks on Tail (Legend) -->
  <condition match="tonumber($click.name$) != $click.name$">
    <eval token="form.Tail">
      if($click.name2$ == $form.Tail$, "*", $click.name2$)
    </eval>
  </condition>
  <!-- Handle clicks on Source (Chart) -->
  <condition match="tonumber($click.name$) == $click.name$">
    <link>
      <param name="target">_blank</param>
      <param name="search">
        index=myindex | search "WARNNING: "
      </param>
    </link>
  </condition>
</drilldown>
```
click.name should be the name of the source, as those are the columns of my chart.

Thanks in advance to helpers
I am getting the following error message whenever I try to log in to my Splunk test environment: user=************** is unavailable because the maximum user count for this license has been exceeded.

I think this is because of a new license I recently uploaded to this box. As the old license was due to expire, I recently got a new free Splunk license (10 GB Splunk Developer License). I received/uploaded it to the test box on Friday, 3 days before the old one was due to expire. I then deleted the old license that day, despite it having a few additional days left. On Sunday (the day the old license was due to expire), I started getting this login issue. As I can't get past the login screen, I can't try re-uploading a different license, etc.

Any suggestions?
Hi there,  I'm looking to set up an automated email that will trigger any time a new alert comes into Incident Review in Splunk ES (using Splunk>enterprise). The idea is for the team to be notified without having the Incident Review page open, and to improve response time. I know I can set emails individually when an alert triggers, but this would be for every 'new' alert that comes in (there are some alerts that are auto-closing), or with an option to only target high-urgency alerts based on volume.

Any advice would be appreciated!
Thanks! That is understandable. Based on your answers so far, I will think through what would work best, and will get back to you. But either way, I think I got all the answers I needed.
As I said before - while fiddling with multiple different intermediate syslog solutions can work if syslog is the native form of sending events, mixing it with Windows usually ends badly one way or another. You either get your data broken and non-parseable, or you have to bend over backwards to force it to work at least half-decently. We don't know your whole environment or the original problem you're trying to solve, so we can't help much here. There is, of course, the additional question of whether you need the Splunk metadata at all if you're forwarding to a third-party syslog receiver anyway.