All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


The documentation seems to suggest that version 8.0.1 of "Splunk Enterprise Security" is available for download from Splunkbase; however, the latest version available there appears to be 7.3.2. Am I missing something?

https://docs.splunk.com/Documentation/ES/8.0.1/Install/UpgradetoNewVersion
https://splunkbase.splunk.com/app/263/
Thank you. IMHO, it's a change that should have been more widely announced and probably should have involved a touchpoint from account teams. This was a deviation from the way the UFs have operated since Day 1. (Yes, it's mentioned in the release notes, but with no specific solutions for commonly ingested logs.) The release notes could at least have mentioned that in order to read Sysmon logs, you need to add "SplunkForwarder" to the Event Log Readers group. That took a while to figure out, and yes, it does appear that Event Log Readers doesn't cover all logs. So, application logs are going to be tricky to remediate. But at least we're not in danger of exceeding our license threshold. (¬_¬)
This is a common problem with any tool. Just because the UF on Windows was for a long time made to run as Local System doesn't mean that's the proper approach. It's up to you and your Windows admins to know what permissions are needed to access various parts of your environment. To access event logs, you have to either edit the ACLs on the event logs (which is a really ugly thing to do) or add the UF user to a group (I don't remember the exact name of the group - Logreaders?). But if you want to access some random files on your system, it depends on the ownership and ACLs on those files/directories. There is no single good answer. This particular directory is most probably connected to IIS, but others will correspond to other services.
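Both remediation routes mentioned in this thread can be sketched in PowerShell (the group name "Event Log Readers" and the account name "SplunkForwarder" come from the posts above; run from an elevated prompt on the monitored host - treat this as a sketch to adapt, not a definitive procedure):

```powershell
# 1. Let the UF service account read Windows event logs
#    by adding it to the built-in group mentioned above:
Add-LocalGroupMember -Group "Event Log Readers" -Member "SplunkForwarder"

# 2. For files not covered by that group (e.g. the HTTPERR directory
#    from the question), grant read access on the directory itself.
#    (OI)(CI)R = read, inherited by child files and folders:
icacls "C:\Windows\System32\Logfiles\HTTPERR" /grant "SplunkForwarder:(OI)(CI)R"
```

The second step is per-directory by design: as noted above, which paths need it depends entirely on what you ingest and how those files are ACLed.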
Ever since upgrading the Windows clients above to 9.0, we've had access issues. We've resolved some of them by adding the "SplunkForwarder" user (which gets provisioned at install time) to the Event Log Readers group. Unfortunately, that hasn't resolved all access issues - IIS logs, for instance. When I deploy a scripted input to a test client to produce a directory listing of C:\Windows\System32\Logfiles\HTTPERR, the internal index gets a variety of errors, one of which is included below (yes, the directory exists):

Get-ChildItem : Access to the path 'C:\Windows\System32\Logfiles\HTTPERR' is denied

So, other than having our IT staff reinstall the UF everywhere to run as a System-privileged user, as it has run in every version I've ever worked with: how are we to know what group the SplunkForwarder user needs to be added to in order to read data that is not under the purview of "Event Log Readers"?
Of course, checking for "missing" events is one of the possible ways of checking uptime. But that's a completely different problem.
@PickleRick is correct: if the data is not in the logs, you can't eval it from nothing. That said, one way we have combated downtime in the past is calculating the duration since the last log entry of any kind from the host. This is never really 100%, because it could be a transport issue while the asset is still alive, or any number of other situations where the asset is alive but not logging. However, knowing whenever that duration exceeds some acceptable threshold is always a good thing. Pair that up with a good CMDB record to prevent tracking decommissioned assets. There are many alternatives to a lack of quality data, but they each come with pros and cons to be accounted for.
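The "duration since the last log entry" approach can be sketched in SPL along these lines (the index scope and the 30-minute threshold are placeholder assumptions to adapt to your environment):

```
| tstats latest(_time) AS last_seen WHERE index=* BY host
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 30
| sort - minutes_silent
```

As noted above, a host surfacing here may be down or merely not forwarding, so this flags candidates for investigation rather than confirmed outages.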
Hello. I am trying to get SAML authentication working on Splunk Enterprise using our local IdP, which is SAML 2.0 compliant. I can successfully authenticate against the IdP, which returns the assertion, but Splunk won't let me in. I get this error: "Saml response does not contain group information." I know Splunk looks for a 'role' variable, but our assertion does not return that. Instead, it returns "memberOf", and I added that to authentication.conf:

[authenticationResponseAttrMap_SAML]
role = memberOf

I also map the role under roleMap_SAML. It seems like no matter what I do, no matter what I put, I get the "Saml response does not contain group information." response. I have a ticket open with tech support, but at the moment, they're not sure what the issue is. Here's a snippet (masked) of the assertion response:

<saml2:Attribute FriendlyName="memberOf" Name="urn:oid:1.2.xxx.xxxxxx.1.2.102" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
  <saml2:AttributeValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xsd:string">
    xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:some-group
  </saml2:AttributeValue>
</saml2:Attribute>

Feeling out of options, I asked ChatGPT (I know, I know), and it said that the namespace our assertion is using may be the issue. It said that Splunk uses the "saml" namespace, but our IdP is returning "saml2". I don't know if that's the actual issue nor, if it is, what to do about it. splunkd.log shows the error message that I'm seeing in the web interface:

12-12-2024 15:14:24.611 -0500 ERROR Saml [847764 webui] - No value found in SamlResponse for match key=saml:AttributeStatement/saml:Attribute attrName=memberOf err=No nodes found for xpath=saml:AttributeStatement/saml:Attribute

I've looked at the Splunk SAML docs but don't see anything about namespacing, so maybe ChatGPT just made that up. What exactly is Splunk looking for that I'm not providing? If anyone has any suggestions or insight, please let me know. Thank you!
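One thing worth noting about the masked assertion: "memberOf" there is only the FriendlyName, while the attribute's Name is the urn:oid:... value. A sketch of the authentication.conf to test, assuming (this is a hypothesis, not a confirmed fix) that Splunk is matching on the Name rather than the FriendlyName; the masked values are kept exactly as in the post:

```
[authenticationResponseAttrMap_SAML]
# As configured in the post:
role = memberOf
# Hypothesis to try instead, using the attribute's full Name:
# role = urn:oid:1.2.xxx.xxxxxx.1.2.102

[roleMap_SAML]
# Map a Splunk role to the (masked) group value the IdP returns:
user = xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:some-group
```

If the OID mapping makes the "does not contain group information" error go away, the FriendlyName/Name mismatch was the culprit rather than the saml/saml2 namespace prefix.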
I'm not sure if I understand your question properly. Are you asking how to find a timestamp which is not included in the data you have? Well, if it's not there, you need to make sure it's exported from the source somehow. It's more a SolarWinds question than a Splunk one.
I am creating a dashboard in Splunk to monitor offline assets in my environment with SolarWinds. I have the add-on and incorporate solarwinds:nodes and solarwinds:alerts into my query. I am running into an issue where I can't get the correct output for how long an asset has been down. In SolarWinds you can see Trigger Time in the Alert Status Overview; this shows the exact date and time the node went down. I cannot find a field in the raw data of either sourcetype that will give me that output. I want to use eval to show how much time has passed since the trigger. Does anyone know how to achieve this?
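For reference, if the alert events do turn out to carry the trigger timestamp under some field, the elapsed time is a straightforward eval. A sketch, where the field names (TriggerTime, NodeName) and the timestamp format are hypothetical placeholders, not the add-on's actual schema:

```
index=solarwinds sourcetype="solarwinds:alerts"
| eval trigger_epoch = strptime(TriggerTime, "%Y-%m-%dT%H:%M:%S")
| eval downtime = tostring(now() - trigger_epoch, "duration")
| table NodeName, TriggerTime, downtime
```

If no such field exists in the raw data, the answers above apply: the value has to be exported from SolarWinds, or approximated from the last event seen per host.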
Thanks, I will give it a try.
Hi. Every index has a file that records the last used bucket number. You should also update this to the correct number on the node where you have copied those buckets. Of course, if you have copied the whole indexes directory, then you have probably copied those files too. If you haven't copied them, the indexer could overwrite old buckets with new events. r. Ismo
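To sanity-check what the highest bucket ID on the destination actually is, the bucket directory names themselves can be inspected. A sketch, assuming the usual db_<newestTime>_<oldestTime>_<bucketId> naming and that SPLUNK_DB points at the indexer's data path (the index name "myindex" is a placeholder):

```shell
# Print the highest bucket ID present under the index's db directory.
# Bucket dirs look like db_1700000000_1690000000_42 -> field 4 is the ID.
: "${SPLUNK_DB:=/opt/splunk/var/lib/splunk}"
ls "$SPLUNK_DB/myindex/db" 2>/dev/null | awk -F_ '/^db_/ {print $4}' | sort -n | tail -1
```

Comparing that number against the counter file on the target node shows whether the counter still needs bumping before the indexer writes new buckets.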
Hi @joe06031990,

you can find the locations where SSL config is present. The following command is helpful for finding locations other than the default. From a command prompt, navigate to the Splunk bin directory and run:

splunk btool inputs list ssl --debug | grep -i local

Use findstr instead of grep on Windows.
Hi, I can see the below error in the internal logs for a host that is not bringing any logs into Splunk:

error SSLOptions [17960 TcListener] - inputs.conf/[SSL]: could not read properties

We don't have SSL options in inputs.conf; just wondered if there were any other locations to check on the universal forwarder, as it works fine for other servers.
How did you solve it?
Hi @Khalid.Rehan, Thank you for updating the thread and letting us know. 
@gcusello that worked great, thank you. Do you also happen to know the best way to add the totals for each carrier, like on lines 5 and 9 of my example chart? Something like appendpipe?
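appendpipe is indeed a common way to interleave per-group subtotal rows. A minimal sketch of the pattern (the field names Carrier and Service are hypothetical stand-ins for whatever the actual chart uses):

```
<your_search>
| stats count BY Carrier Service
| appendpipe
    [ stats sum(count) AS count BY Carrier
    | eval Service="TOTAL" ]
| sort Carrier Service
```

The subsearch inside appendpipe sees the already-aggregated rows, sums them per carrier, labels each subtotal row, and the final sort slots each TOTAL row under its carrier's detail rows.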
I have the Microsoft Teams Add-on for Splunk installed and set up the inputs for the webhook. When I try to curl the webhook using the internal IP and the port that I have it set to, I get a failed-to-connect error. Possibly, part of the issue could be that I don't have the webhook set up for HTTPS. Unfortunately, I'm not sure how to make the webhook accessible over HTTPS; this isn't something I typically do. I've tried looking up how to make my webhook accessible, but I haven't had any luck, and found nothing that made clear sense to me.
Hi @belleke , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @YuliyaVassilyev,

first of all: Splunk isn't Excel!

Anyway, you could try something like this:

<your_search>
| eval col=Region."|".Director
| bin span=1mon _time
| chart count OVER col BY _time
| rex field=col "^(?<Region>[^\|]+)\|(?<Director>.*)"
| fields - col
| table Region Director *
| addcoltotals
| addtotals

and then add partial totals.

Ciao.
Giuseppe
Hi there! I want to create a scorecard by Manager and Region counting my Orders over Month. So the chart would look something like:  I have all the fields: Region, Director, Month and Order_Numb... See more...
Hi there! I want to create a scorecard by Manager and Region counting my Orders over Month. So the chart would look something like:  I have all the fields: Region, Director, Month and Order_Number to make a count. Please let me know if you have an efficient way to do this in SPL. Thank you very much!