All Posts


@Anubis wrote: 1. how many events in base search: 1.6 million

This is your problem. Although this is not specifically about Dashboard Studio (which you seem to be using, as you talk about chained searches), the limit is 500,000 events. What you are intending to do, i.e. post-filter a base search, is indeed logical, but there is no way you can manage 1.6 million events in a base search. See this link for a discussion of base searches (a chained search is what is referred to there as a post-processing search): https://docs.splunk.com/Documentation/Splunk/9.3.2/Viz/Savedsearches - with particular reference to "Event retention" and "Limit base search results and post-process complexity". And do not think about increasing limits.conf; that will not make things better.

You can still shift work from the post-processing/chained searches into the base, but you need to consider each use case of your base search to work out how that post-filtering can work. If your panel searches are all doing things like stats, then move the stats into the base search. You can always do something like

| stats count by a b c d

in the base, and if you only want a count by c, you can then do this in your panel search:

| stats sum(count) as count by c
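Applied to the searches from this thread, the reworked split might look roughly like the sketch below. This is only an illustration: whether the base search's stats output stays under the 500,000-result limit depends on the combined cardinality of device_name, field1 and field2 in your data.

Base search (retains aggregated rows rather than 1.6 million raw events):

index=yum sourcetype=woohoo earliest=-12h@h
| stats sum(field3) as field3 by device_name field1 field2

Chained/panel search (post-processes the summary rows):

| search field1="$field1_tok$" AND field2="$field2_tok$"
| stats sum(field3) as field3 by device_name
| sort - field3
| head 10

The second stats is needed because the base now carries field1 and field2 in its by clause; summing field3 again collapses those dimensions back down to device_name.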
Hello, thanks for the response. I forgot to include the 'by device_name' in my post. Sorry about that.
1. How many events in base search: 1.6 million
2. I used the tokens in the chained search so as not to call the index every time a token is changed. Seemed logical.
3. Putting head in front of table is better. Honest mistake.
So I have search A, which creates a variable from the search results (variableA). I need to search another index using variableA in the source, and I want to append one column from the second search into a table with the results from the first, something like this:

index=blah source=blah | rex variableA=blah, field1=blah, field2=blah, field3=blah

index=blah source=$variableA$ | rex field4=blah

table field1, field2, field3, field4

Not sure how this gets done?
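To illustrate the shape of what is being asked here, one possible approach is a join on the extracted field. This is only a hedged sketch, not a tested answer: the index, source and field names are placeholders, the rex patterns are invented, and it assumes the extracted variableA value exactly matches the source value in the second index.

index=indexA source=sourceA
| rex "variableA=(?<variableA>\S+)"
| rex "field1=(?<field1>\S+)"
| rex "field2=(?<field2>\S+)"
| rex "field3=(?<field3>\S+)"
| join type=left variableA
    [ search index=indexB
      | eval variableA=source
      | rex "field4=(?<field4>\S+)"
      | fields variableA field4 ]
| table field1 field2 field3 field4

Note that join subsearches are subject to result and time limits; if the second index is large, it would be more efficient to restrict its source up front (for example by feeding the extracted values in via a subsearch), but the join keeps the illustration simple.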
This is not really what chained searches are good for - what you are doing is basically loading the entire dataset into memory in your index= base search. How many events do you have? There is a limit to the number of results a base search can retain. For this type of usage, you will often make the search slower, because all the processing is done on the search head rather than benefiting from search distribution.

I suspect the issue you are seeing is related to the data volumes you are trying to manage through your base search. Depending on what other searches you have, your search chain might be better expressed like this.

Base search:

index=yum sourcetype=woohoo earliest=-12h@h
| stats sum(field3) as field3 by field1 field2

Chained search:

| search field1="$field1_tok$" AND field2="$field2_tok$"

Panel:

Your original search makes no sense as posted, because you are doing a stats command and then trying to use device_name, which does not exist after the stats. Also, you only have a single row after the stats, so the sort and head are pointless...

| sort - field3
| head 10
| table device_name field3

Note: you should ALWAYS put transforming commands like table as late as possible in any pipeline - i.e. it is better to put the head BEFORE the table, so in a normal search only 10 results are sent from the indexer to the search head rather than sending all results and discarding all but 10.

All of the above will depend on what other usage you are making of your base search, but please come back with more details if you still need advice.
I have a search on my dashboard that takes ~20 seconds to complete. This search is a member of a chain.

Base:

index=yum sourcetype=woohoo earliest=-12h@h | table device_name field1 field2 field3

Chained search:

| search field1="$field1_tok$" AND field2="$field2_tok$"

Panel:

| stats sum(field3) as field3 by device_name | sort - field3 | table device_name field3 | head 10

Everything works fine, but when I change tokens the panel loads a cached version of the table with incorrect values. 5 to 10 seconds later the panel updates with the correct values, but without any indication. So, is there a setting to turn off these cached results?
Thought I'd add to this post, in regards to using a curl command to push a lookup file to a Splunk instance, as other Splunk users may find it useful. It's not a replacement for @mthcht's excellent python scripts, but it is often easy to use curl commands when testing and validating things.

Here's a worked example that creates a simple lookup file (tested against a Cloud stack and lookup editor v4.0.4)...

curl -sk --request POST https://localhost:8089/services/data/lookup_edit/lookup_contents \
  -H "Authorization: Splunk $MYTOKEN" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d timeout=10 \
  -d namespace=search \
  -d lookup_file=lookupfilename.csv \
  -d contents=[[\"field1\",\"field2\"],[\"value1\",\"value2\"]] \
  -d owner=nobody
# n.b. owner is only needed when creating new lookups - a 'user' name creates the new lookup file with private permissions, whereas 'nobody' results in it being shared globally

Note, the 'contents' format must be a 2D JSON array. To make this easier, 'contents' can also be supplied via a file, like this...

$ cat <<EOF > myLocalLookup.json
contents=[["field1","field2"],["value1","value2"]]
EOF
$ curl -sk --request POST https://localhost:8089/services/data/lookup_edit/lookup_contents \
  -H "Authorization: Splunk $MYTOKEN" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d timeout=10 \
  -d namespace=search \
  -d lookup_file=lookupfilename.csv \
  -d @myLocalLookup.json \
  -d owner=nobody

Now, to really make this useful, existing CSV files need to be formatted as JSON. There are multiple ways this could be done, but here is a simple python one-liner (*nix tested) that reads in a CSV file on stdin and outputs it as JSON.

(python -c $'import sys;import csv;import json;\nwith sys.stdin as f: csv_array=list(csv.reader(f)); print("contents="+json.dumps(csv_array))' > myLocalLookup.json) < myLocalLookup.csv

Hopefully, others may find this useful too.
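One small addition: after the POST returns, a quick way to sanity-check the upload is to read the lookup back in Search (using the same file name as in the curl examples above):

| inputlookup lookupfilename.csv

If everything worked, this should show the field1/field2 header and the value1/value2 row.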
The documentation seems to suggest that version 8.0.1 of "Splunk Enterprise Security" is available for download from Splunkbase; however, the latest version available there appears to be 7.3.2. Am I missing something?
https://docs.splunk.com/Documentation/ES/8.0.1/Install/UpgradetoNewVersion
https://splunkbase.splunk.com/app/263/
Thank you. IMHO, it's a change that probably should have been more widely announced and probably should have involved a touchpoint from account teams. This was a deviation from the way the UFs have operated since Day 1. (Yes, it's mentioned in the release notes ... but with no specific solutions for commonly ingested logs ...) The release notes could at least have mentioned that in order to read Sysmon logs, you need to add "SplunkForwarder" to the Event Log Readers group. That took a while to figure out ... and yeah, it does appear that Event Log Readers doesn't cover all logs. So, yes, Application logs are going to be tricky to remediate. But at least we're not in danger of exceeding our license threshold. (¬_¬)
This is a common problem with any tool. Just because the UF on Windows was, for a long time, made to run as Local System doesn't mean that it's the proper approach. It's up to you and your Windows admins to know what permissions are needed to access various parts of your environment. In order to access event logs, you have to either edit the ACLs for the event logs (which is a really ugly thing to do) or add the UF user to a group (I don't remember the exact name - Event Log Readers?). But if you want to access some random files on your system, it depends on the ownership and ACLs on those files/directories. There is no single good answer. This particular directory is most probably connected to IIS, but others will correspond to other services.
Ever since upgrading our Windows clients to 9.0 and above, we've had access issues. We've resolved some of that by adding the "SplunkForwarder" user (which gets provisioned at the time of the install) to the Event Log Readers group. Unfortunately, that hasn't resolved all access issues - IIS logs, for instance. When I deploy a scripted input to a test client to provide a directory listing of C:\Windows\System32\Logfiles\HTTPERR, the internal index gets a variety of errors, one of which is included below (yes, the directory exists):

Get-ChildItem : Access to the path 'C:\Windows\System32\Logfiles\HTTPERR' is denied

So, other than having our IT staff reinstall the UF everywhere to run as a System-privileged user, as it has run in every version I've ever worked with, how are we to know what group the SplunkForwarder user needs to be added to in order to read data that is not under the purview of "Event Log Readers"?
Of course, checking when there are "missing" events is one of the possible ways of checking uptime. But that's a completely different problem.
@PickleRick is correct: if the data is not in the logs, you can't eval it from nothing. That said, one way we have combated this in the past is calculating the duration since the last log entry of any kind from the host. This is never really 100%, because it could be a transport issue while the asset is still alive, or any number of other situations where the asset is alive but not 'logging'. However, knowing when that duration exceeds an acceptable threshold is always a good thing. Pair that up with a good CMDB record to avoid tracking decommissioned assets. There are many ways to work around a lack of quality data, but each comes with pros and cons to be accounted for.
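As a rough sketch of that "duration since the last log entry" idea (the index scope and the 30-minute threshold are placeholders to adjust for your environment), something like the following tstats search keeps it cheap to run:

| tstats latest(_time) as last_event_time where index=* by host
| eval minutes_since_last_event = round((now() - last_event_time) / 60, 1)
| where minutes_since_last_event > 30
| sort - minutes_since_last_event

As noted above, a host can appear here for transport reasons rather than a real outage, so treat it as an indicator to investigate rather than proof the asset is down.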
Hello. I am trying to get SAML authentication working on Splunk Enterprise using our local IdP, which is SAML 2.0 compliant. I can successfully authenticate against the IdP, which returns the assertion, but Splunk won't let me in. I get this error: "Saml response does not contain group information."

I know Splunk looks for a 'role' variable, but our assertion does not return that. Instead, it returns "memberOf", and I added that to authentication.conf:

[authenticationResponseAttrMap_SAML]
role = memberOf

I also map the role under roleMap_SAML. It seems like no matter what I do, no matter what I put, I get the "Saml response does not contain group information." response. I have a ticket open with tech support, but at the moment, they're not sure what the issue is.

Here's a snippet (masked) of the assertion response:

<saml2:Attribute FriendlyName="memberOf" Name="urn:oid:1.2.xxx.xxxxxx.1.2.102" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
    <saml2:AttributeValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xsd:string">
        xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:some-group
    </saml2:AttributeValue>
</saml2:Attribute>

Feeling out of options, I asked ChatGPT (I know, I know), and it said that the namespace our assertion is using may be the issue. It said that Splunk uses the "saml" namespace, but our IdP is returning "saml2". I don't know if that's the actual issue nor, if it is, what to do about it.

splunkd.log shows the error message that I'm seeing in the web interface:

12-12-2024 15:14:24.611 -0500 ERROR Saml [847764 webui] - No value found in SamlResponse for match key=saml:AttributeStatement/saml:Attribute attrName=memberOf err=No nodes found for xpath=saml:AttributeStatement/saml:Attribute

I've looked at the Splunk SAML docs, but don't see anything about namespacing, so maybe ChatGPT just made that up. What exactly is Splunk looking for that I'm not providing? If anyone has any suggestions or insight, please let me know. Thank you!
I'm not sure if I understand your question properly. Are you asking how to find a timestamp which is not included in the data you have? Well, if it's not there, you need to make sure it's exported from the source somehow. It's more a SolarWinds question than a Splunk one.
I am creating a dashboard with Splunk to monitor offline assets in my environment with SolarWinds. I have the add-on and incorporate solarwinds:nodes and solarwinds:alerts into my query. I am running into an issue where I can't get the correct output for how long an asset has been down. In SolarWinds you can see Trigger time in the Alert Status Overview. This shows the exact date and time the node went down. I cannot find a field in the raw data of either sourcetype that will give me that output. I want to use eval to show how much time has passed since the trigger. Does anyone know how to achieve this?
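For what it's worth, if a trigger-time field were available in the solarwinds:alerts events, the eval itself would be straightforward. The sketch below is purely illustrative: the TriggerTime and NodeName field names and the timestamp format are assumptions that may not match, or even exist in, the data, as the replies in this thread point out.

sourcetype="solarwinds:alerts"
| eval trigger_epoch = strptime(TriggerTime, "%Y-%m-%dT%H:%M:%S")
| eval hours_down = round((now() - trigger_epoch) / 3600, 1)
| table NodeName TriggerTime hours_down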
Thanks, I will give it a try.
Hi
Every index has a file which records the last used bucket number. You should also update this to refer to the correct number on the node where you have copied those buckets. Of course, if you have copied the whole index directory, then you have probably copied those files too. If you haven't copied them, the indexer could overwrite old buckets with new events.
r. Ismo
Hi @joe06031990
You can find out the locations where SSL config is present. The following command is helpful for finding locations other than the default. From a command prompt, navigate to splunk--->bin and run the following:

splunk btool inputs list ssl --debug | grep -i local

Use findstr instead of grep in the case of Windows.
Hi, I can see the below error in the internal logs for a host that is not bringing any logs in:

Splunk error SSLOptions [17960 TcListener] - inputs.conf/[SSL]: could not read properties

We don't have SSL options in inputs.conf. I just wondered if there were any other locations to check on the universal forwarder, as it works fine for other servers.
How did you solve it?