I just checked a Splunk Cloud stack and the only users with the delete option are local. The SAML users do not have the Edit action, and therefore no delete option.
You're awesome. But now I have a conundrum. The analysts do not like the fact that we added the collection at the end of the alert, because now, when they go to the Splunk link, they have accidentally kicked off more tickets: they didn't remove the collection before modifying the search to investigate an alert. Now I'm trying to figure out how I can collect and rename fields while also not impacting their search.
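One pattern that might help (a sketch only, not a confirmed fix; the index name alert_summary and the field names are illustrative): keep the rename/collect steps in the scheduled alert search, and point the alert's drilldown link at a copy of the search that stops before the collect, so analysts never re-run the write:

index=main sourcetype=ids_alert severity=high
| rename src AS source_address, dest AS destination_address
| collect index=alert_summary

While experimenting, adding testmode=true to collect makes it preview the results without actually writing to the summary index.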
I just ran into this issue, and as an easy alternative you can also add disable_splunk_cert_check = true to every input stanza you have. It is mentioned in their guide, but for some reason they did not include it as a GUI option even though the guide suggests it is there. https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector Yet another case where companies think software engineers can be SEs and tech writers.
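For reference, a minimal sketch of what that might look like in the connector's inputs.conf (the stanza name and input type here are illustrative; use whatever your existing input stanzas are actually named):

[akamai_siem://my_security_events]
disable_splunk_cert_check = true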
Do you have a Splunk Universal Forwarder (UF) installed on this Windows instance with the Windows add-on installed/configured to get the events indexed that would typically contain this type of information?
1. Create and manage users with Splunk Web - Splunk Documentation
2. Manage orphaned knowledge objects - Splunk Documentation
1. You can't. Splunk doesn't know whether a user is active or not - only whether they pass authentication. A user who never signs in is just a user who never signs in, not an inactive/expired user. You can, however, file a support request to have the user removed.
2. Assign the user's KOs to another user. Go to Settings -> All configurations and click the "Reassign Knowledge Objects" button.
I second what @_JP said.  The situation is complicated by the two different environments.  Having on-prem and AWS indexers in the same cluster would quickly run up your AWS data export charges unless you put the old indexers in manual detention to keep them from accepting replicated data.  PS should be able to help.
Considering our current setup (i.e., authentication and authorization integrated with SAML), how do we: 1. mark a user inactive? 2. handle his/her knowledge objects?
If you change the stack mode to none, you should see them as bars rather than being stacked. Click Format and then select the left-most option under Stack Mode.
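If you'd rather set this in the dashboard source than through the Format menu, the equivalent Simple XML chart option should be (assuming a standard chart panel):

<option name="charting.chart.stackMode">default</option>

where "default" is the unstacked mode, and "stacked" / "stacked100" are the other values.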
Confirm the ycw field has values. If ycw is null, the stats command will return no results, because there are no values by which to group the stats. Also, the evals should not have the field names in double quotes, because that treats them as literal strings rather than as field names. Use single quotes or no quotes around field names.
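To make the quoting difference concrete (FieldA here is just a placeholder name):

| eval x = 'FieldA'   (single quotes: the value of the field FieldA)
| eval x = "FieldA"   (double quotes: the literal string FieldA)

So count(eval("FieldA"="True")) compares two string literals that are never equal, and the count comes back 0 every time.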
I've added some fake data before the SPL you provided, and when I run it I get results like the below screenshot. It's hard to say what is going on in your environment and with your data, but I would tend to think either the base search returned nothing or your fields (FieldA, FieldB, FieldC) don't exist in the data returned from the base search.

| makeresults count=300
| eval _time=_time-(86400*(random() % 61))
| eval FieldA=if(random() % 2==1,"True", "False")
| eval FieldB=if(random() % 2==1,"True", "False")
| eval FieldC=if(random() % 2==1,"True", "False")
```^^^^ Fake data added by me ^^^^```
| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval('FieldA'="True")) as FieldA_True, count(eval('FieldB'="True")) as FieldB_True, count(eval('FieldC'="True")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True
I don't recognize that screenshot - is that from an SPL query/dashboard, or the monitoring console?

Keep in mind that there are multiple "timestamps" in Splunk. You have _time, which is the timestamp from within the event data (or, if there's no time in the event, the timestamp Splunk assigns when the data is seen). Also know that sometimes the values for _time can get wacky if the data wasn't extracted correctly (often a user issue when configuring new sourcetypes). This probably doesn't happen as often in _internal.

There is also _indextime, which is when Splunk indexed that event. This is closer to what Splunk looks at when rolling data from hot to warm to cold to frozen. I don't believe it is exactly this field, but Splunk looks under the hood, where the data is stored, at an age of the data parallel to _indextime when rolling buckets.

If you want a quick and dirty way to see what sort of time data lives in your _internal index, run this SPL over All Time. You might see stuff way in the past or way in the future if there's some weird data in the index:

index=_internal
| stats earliest(_time) as earliest_time, latest(_time) as latest_time, earliest(_indextime) as earliest_indextime, latest(_indextime) as latest_indextime
| eval earliest_time=strftime(earliest_time,"%x %r")
| eval latest_time=strftime(latest_time,"%x %r")
| eval earliest_indextime=strftime(earliest_indextime,"%x %r")
| eval latest_indextime=strftime(latest_indextime,"%x %r")

Also keep in mind that Splunk doesn't act on individual events that hit your time-period thresholds. Rather, it deals with entire buckets. If your buckets are configured in a way that you have a year's worth of data in one bucket, but the frozen setting is 60 days, Splunk won't handle that bucket (like deleting it if frozen) until all data in that bucket hits that time threshold. So you might still see "old" data showing up in results beyond your time configurations, because you still have buckets with pre- and post-threshold data in them.

You're touching on some advanced topics in bucket tuning and what Splunk is really doing under the hood. Here are some helpful docs to help you dig into these concepts.
When you click the export button in the visualization you should see a dropdown with csv as an option. Do you not see this?
This worked great, but how do I also display, in the same search, what the first record and the last record were for the duration? Something like a table with: username, day, Duration, First, Last
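A sketch of one common way to get there (this assumes the duration was computed with a stats by username and day; the field names are illustrative, not from the original answer):

| stats min(_time) as First, max(_time) as Last by username, day
| eval Duration = tostring(Last - First, "duration")
| eval First = strftime(First, "%H:%M:%S"), Last = strftime(Last, "%H:%M:%S")
| table username, day, Duration, First, Last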
Yes, this is possible. I don't know of any write-up on how to do this. The simplest answer that you'll receive is, "This is a great thing for Splunk PS to help with..."

That being said, I feel like you are on the right track - you can add the new indexers to the cluster, force a rebalance/replication to the new ones, then shut down the old ones.

Do you have plenty of hardware to POC this in a non-production environment? I would practice this in not-production, and then once you are able to do it successfully you can repeat it in prod. Also, do you have backups in case anything goes wrong?

(Also, @Darthsplunker showing up with a hard-mode question as a brand new community user - I applaud and salute you!)
That particular property is keyed off of your Y fields, not the X field that you'd like. For example, to change the colors with your field names, you'd do something like this - but I understand this isn't what you're looking for:

<option name="charting.fieldColors">{"limit":0x333333,"spend":0xd93f3c}</option>

I came up with a real hack way to do this... someone who knows more than me might have a better way. Since this chart property is keyed off the name of the Y fields, you could custom-name all the Y fields so you can reference them in the fieldColors property. Here is some example SPL with a bunch of evals to really show how you can split these up:

| makeresults format=csv data="market,limit,spend
\"AU Pre\", 1462912, 884854
\"AU Post\", 2160567, 1166031
\"DE Pre\", 91217, 76973
\"DE Post\", 160221, 97906"
| eval AU_Pre_limit = if(market="AU Pre",limit,null())
| eval AU_Post_limit = if(market="AU Post",limit,null())
| eval DE_Pre_limit = if(market="DE Pre",limit,null())
| eval DE_Post_limit = if(market="DE Post",limit,null())
| table market, AU_Pre_limit, AU_Post_limit, DE_Pre_limit, DE_Post_limit, spend

Then you can build a fieldColors property like this:

<option name="charting.fieldColors">
{"AU_Pre_limit":0x333333,"AU_Post_limit":0xd93f3c,
"DE_Pre_limit":0xeeeeee,"DE_Post_limit":0x65a637,
"spend":0xaa0000}
</option>

Here is an example dashboard. I will also attach the Simple XML in a PDF so you can try it out.

** Again... this is very hacky, and I typically try to keep as much formatting/display info out of my SPL as I can (or at least put it way at the end or do it as a post-process). This totally breaks any MVC patterns I like to follow. But... if you have something you just need to get working and pretty for now, this will do.
You would do a second stats to roll them up like this ...

index=guardium ruleDesc="OS Command Injection"
| stats count by dbUser, DBName, serviceName, sql
| eval category = case(
    count < 6, "1-5",
    count < 11, "6-10",
    count < 16, "11-15",
    1==1, "16+")
| stats count by category

Then you would set up a drilldown on the chart to pass a token to another search and limit it based on the token.
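A minimal sketch of that drilldown in Simple XML (the token name is illustrative): the chart sets a token from the clicked category value,

<drilldown>
  <set token="category_tok">$click.value$</set>
</drilldown>

and the detail panel's search can then reference $category_tok$, for example by filtering on category="$category_tok$" or by re-deriving the matching count range from the token.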
Hi community,

| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval("FieldA"="True")) as FieldA_True,
        count(eval('FieldB'="True")) as FieldB_True,
        count(eval('FieldC'="True")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True

I get 0 results even though there is data. Could anyone please suggest a correct query? BR
Another option would be to set the delimiter to a comma in the multiselect, then in the macro use the IN keyword like this:

... index IN ($index_scope$) ... #The rest of the macro# ...
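For reference, a sketch of the multiselect side of that (the token name and choices are illustrative):

<input type="multiselect" token="index_scope">
  <label>Indexes</label>
  <delimiter>,</delimiter>
  <choice value="main">main</choice>
  <choice value="security">security</choice>
</input>

With a comma delimiter, selecting both choices makes $index_scope$ expand to main,security, which drops cleanly into the index IN (...) clause in the macro.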
You are correct, some of the fields are automatically extracted as part of the Event heading, but none of the fields I am interested in are available, such as:

tracePoint
content.attributes[]  //not interested in the headers
applicationName
applicationVersion
environment

By the way, I tried what you suggested:

| spath input=data

but I see no change in my search results. Thank you
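One variation that might be worth checking (a sketch; it assumes the JSON really does live in a field named data and borrows a field name from the list above): extract a single path explicitly and see whether it populates -

| spath input=data path=applicationName output=applicationName

If that also comes back empty, the JSON may actually be in _raw (try | spath with no input argument), or the data field may not contain valid JSON.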