Ok - I understand what you're looking at. I'm also assuming you're not on Splunk Cloud, since I will be referring to filesystem-level things. Things like this are set within the indexes.conf configuration file. If you have your own defined indexes, you might have configured some of these time-to-live settings there. But within the Splunk install there is a default set of indexes configured. For example, the default basic configs are in %SPLUNK_HOME%\etc\system\default\indexes.conf. DO NOT EDIT THAT FILE. But you can look at it to see where some of the configurations might be coming from for internal Splunk indexes, the default values for stuff like frozenTimePeriodInSecs, etc. When you create your own indexes, Splunk creates additional indexes.conf files in various places depending on how you configured things (%SPLUNK_HOME%\etc\system\local\indexes.conf, or maybe %SPLUNK_HOME%\etc\apps\some_app_you_created\local\indexes.conf). Even if you tried to override the "default" value for something, it might not be in effect because of the Splunk conf file precedence.
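As a rough sketch (the index name and retention value below are made up, not pulled from your environment), an override in one of those local indexes.conf files might look like this:

[my_custom_index]
homePath = $SPLUNK_DB/my_custom_index/db
coldPath = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb
# keep data roughly 90 days before it rolls to frozen (deleted by default)
frozenTimePeriodInSecs = 7776000

To see which file a given setting is actually coming from after precedence is applied, btool is your friend: splunk btool indexes list my_custom_index --debug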
Okay, I'm getting closer. I get a line chart to show up now. However, I need 3 different lines representing the 3 different customers and the number of submissions for each day. I just need to get the SPL right. Thanks for your assistance and patience!!
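For what it's worth, the usual pattern for one line per customer per day is a timechart split by the customer field. This is only a sketch - the index, sourcetype, and customer field names below are placeholders, swap in whatever your data actually uses:

index=your_index sourcetype=your_sourcetype
| timechart span=1d count by customer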
I just checked a Splunk Cloud stack and the only users with the delete option are local. The SAML users do not have the Edit action, therefore no delete.
You're awesome. But now I have a conundrum. The analysts do not like the fact that we added the collection at the end of the alert, because now, when they go to the Splunk link, they have accidentally kicked off more tickets: they didn't remove the collection before modifying the search to investigate an alert. Now I'm trying to figure out how I can collect, and rename fields, while also not impacting their search.
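For context, a minimal sketch of the kind of tail being described (the summary index and field names here are placeholders, not from the actual alert):

<the alert's base search>
| rename raw_field AS friendly_field
| collect index=my_summary_index

The design issue is that collect writes whenever the search runs to completion, so any analyst who opens the alert's search link and re-runs it with that tail still attached will write to the summary index again - which is exactly the accidental-ticket behavior described above.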
I just ran into this issue and, as an easy alternative, you can also add disable_splunk_cert_check = true to every input you have under the stanza. It is mentioned in their guide, but for some reason it is not exposed in the GUI the way the docs suggest. https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector Yet another case where companies think software engineers can be SEs and tech writers.
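If it helps, this is roughly what that looks like in the connector's inputs.conf - the stanza name below is hypothetical, keep whatever your existing input stanza is actually called and just add the setting:

# stanza name is an example only
[akamai_siem://my_security_configuration]
disable_splunk_cert_check = true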
Do you have a Splunk Universal Forwarder (UF) installed on this Windows instance with the Windows add-on installed/configured to get the events indexed that would typically contain this type of information?
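For reference, the Windows add-on picks up those events via WinEventLog inputs on the UF. A sketch only - which channels and which destination index you need depend on the information you're after:

[WinEventLog://Security]
disabled = 0
index = wineventlog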
1. You can't. Splunk doesn't know if a user is active or not - only that they pass authentication (or not). A user never signing in is just a user who never signs in rather than an inactive/expired user. You can, however, file a support request to have the user removed.
2. Assign the user's KOs to another user. Go to Settings->All configurations and click the "Reassign Knowledge Objects" button.
I second what @_JP said. The situation is complicated by the two different environments. Having on-prem and AWS indexers in the same cluster would quickly run up your AWS data export charges unless you put the old indexers in manual detention to keep them from accepting replicated data. PS should be able to help.
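For reference, manual detention is a per-indexer setting in server.conf, along these lines (a sketch; it can also be toggled via the CLI/REST rather than editing the file):

# server.conf on each old (on-prem) indexer
[clustering]
manual_detention = on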
Considering our current setup, i.e. authentication and authorization integrated with SAML:
1. how do we mark a user inactive?
2. what do we do with his/her knowledge objects?
If you change the stack mode to none you should see them as bars rather than being stacked. Click Format and then select the left-most selection under Stack Mode.
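If the chart lives in a dashboard, the same thing can be set in the panel's Simple XML ("default" is the unstacked mode):

<option name="charting.chart.stackMode">default</option>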
Confirm the ycw field has values. If ycw is null then the stats command will return no results - because there are no values by which to group the stats. Also, the evals should not have the field names in double quotes, because that treats them as literal strings rather than as field names. Use single quotes or no quotes around field names.
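To illustrate with the same field name used below - this counts nothing, because "FieldA" in double quotes is the literal string FieldA, which never equals "True":

... | stats count(eval("FieldA"="True")) as FieldA_True by ycw

whereas this counts events where the field FieldA has the value True:

... | stats count(eval('FieldA'="True")) as FieldA_True by ycw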
I've added some fake data before the SPL you provided and when I run it I get results like the below screenshot. It's hard to say what is going on in your environment and with your data, but I would tend to think either the base search returned nothing or your fields (FieldA, FieldB, FieldC) don't exist in the data returned from the base search.
| makeresults count=300
| eval _time=_time-(86400*(random() % 61))
| eval FieldA=if(random() % 2==1,"True", "False")
| eval FieldB=if(random() % 2==1,"True", "False")
| eval FieldC=if(random() % 2==1,"True", "False")
```^^^^ Fake data added by me ^^^^```
| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval('FieldA'="True")) as FieldA_True, count(eval('FieldB'="True")) as FieldB_True, count(eval('FieldC'="True")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True
I don't recognize that screenshot - is that from an SPL query/dashboard, or the monitoring console? Keep in mind that there are multiple "timestamps" in Splunk. You have _time, which is the timestamp from within the event data (or, if there's no time in the event, the timestamp Splunk assigns when the data is seen). Also know that sometimes the values for _time can get wacky if the data wasn't extracted correctly (often a user issue when configuring new sourcetypes). This probably doesn't happen as often in _internal. There is also _indextime, which is when Splunk indexed that event. This is closer to what Splunk looks at when rolling data from hot to warm to cold to frozen. I don't believe it is exactly this field, but under the hood Splunk tracks an age for the stored data that parallels _indextime when rolling buckets. If you want a quick and dirty way to see what sort of time data lives in your _internal index, run this SPL over All Time. You might see stuff way in the past or way in the future if there's some weird data in the index:
index=_internal
| stats earliest(_time) as earliest_time, latest(_time) as latest_time, earliest(_indextime) as earliest_indextime, latest(_indextime) as latest_indextime
| eval earliest_time=strftime(earliest_time,"%x %r")
| eval latest_time=strftime(latest_time,"%x %r")
| eval earliest_indextime=strftime(earliest_indextime,"%x %r")
| eval latest_indextime=strftime(latest_indextime,"%x %r")
Also keep in mind that Splunk doesn't work with individual events that hit your time-period thresholds. Rather, it deals with entire buckets. If your buckets are configured in a way that you have a year's worth of data in one bucket, but the frozen setting is 60 days, Splunk won't handle that bucket (like deleting it if frozen) until all data in that bucket hits that time threshold. So you might still see "old" data showing up in results beyond your time configurations because you still have buckets with pre- and post-threshold data in them. You're touching on some advanced topics in bucket tuning and what Splunk is really doing under the hood. Here's some helpful docs to help you dig into these concepts.
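If you want to look at it from the bucket side instead of the event side, dbinspect will show you the per-bucket time ranges and states (run this over All Time as well):

| dbinspect index=_internal
| eval bucket_start=strftime(startEpoch,"%x %r"), bucket_end=strftime(endEpoch,"%x %r")
| table bucketId, state, bucket_start, bucket_end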
This worked great, but how do I also display, in the same search, what the first record and the last record were for the duration? Something like a table with username, day, Duration, First, Last.
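Without knowing the search this builds on, the usual pattern looks something like the sketch below (field names are placeholders and Duration comes out in seconds):

... | stats earliest(_time) as First, latest(_time) as Last, range(_time) as Duration by username, day
| eval First=strftime(First,"%x %r"), Last=strftime(Last,"%x %r")
| table username, day, Duration, First, Last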
Yes, this is possible. I don't know of any write-up on how to do this. The simplest answer that you'll receive is, "This is a great thing for Splunk PS to help with..." That being said, I feel like you are on the right track - you can add the new indexers to the cluster, force a rebalance/replication to the new ones, then shut down the old ones. Do you have plenty of hardware to POC this in a non-production environment? I would practice this in not-production, and then once you are able to do it successfully you can repeat it in prod. Also, do you have backups in case anything goes wrong? (Also, @Darthsplunker, showing up with a hard-mode question as a brand new community user - I applaud and salute you!)
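At a very high level, the CLI moving parts look something like the sketch below - the exact sequence, replication/search factor checks, and timing between steps are exactly the things PS or a careful runbook should nail down before you touch prod:

# on the cluster manager, after the new indexers have joined the cluster
splunk rebalance cluster-data -action start

# on each old indexer, once its bucket copies exist elsewhere, take it down gracefully
splunk offline --enforce-counts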