All Posts

What information do you have in Splunk? Which system is the user locked out of?
Please share your two searches (in code blocks)
I am pushing values from Power Automate to an AppD schema. All values are getting captured under the AppD schema except AppleId, which is an email ID that I defined as a string. Here the appleId value is null. Why is it not capturing the value?
There is a user who says his account is locked. I want to check, using Splunk, what the cause is. How can I do that?
Hi Uma, What do you mean by the value not getting updated? If you run the query in the query browser, does it return the correct value? Is it only in the calculated metric where it doesn't return the correct value?
What information do you have available to you to help you determine this?
Thanks for the query. I need to send an alert a day before the daylight saving changes in Europe, i.e. Sun, Mar 30, 2025 and Sun, Oct 26, 2025. Could you please tell me how to update this query? Let's say it runs at 2 PM the day before, with the message.
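Not from this thread, but a minimal sketch of one common way to detect "tomorrow the clocks change" (assuming the search head's local time zone follows the European DST schedule): compare the UTC offset of now with the offset 24 hours from now, and only produce a result when they differ. Scheduled daily at 14:00, such a search would fire only on the day before the change.

| makeresults
| eval offset_now = strftime(now(), "%z")
| eval offset_tomorrow = strftime(relative_time(now(), "+1d"), "%z")
| where offset_now != offset_tomorrow
| eval message = "Daylight saving time changes tomorrow"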
Thanks for the code. It seems to work manually with that. I need to report to Cisco that the zero-code instrumentation is not working as expected, but I don't have privileges to open a case for that.
Hello everyone! I'm not sure how to correctly name this, but I will try to explain carefully what I want to achieve. In our infrastructure we have plenty of Windows Server instances with the Universal Forwarder installed. All servers are divided into groups according to the particular application the servers host. For example, Splunk servers have the group 'spl', remote desktop session servers have the group 'rdsh', etc. Each server has an environment variable with this group value. By design, the access policy for logs was built on these groups: one group, one index. Because of that, each UF input stanza has the option "index = <group>". According to this idea, the introspection logs of the UF agents belong to the 'spl' (Splunk) group/index.

And here the nuisance starts. Sometimes UF agents report errors that demand action on the running hosts, for example restarting the agent manually. I see these errors because I have access to the 'spl' index, but I don't have access to all Windows machines, so I have to notify the machine owner manually. So the question is: how can I create a sort of tag or field on the UF that would help me separate all Splunk UF logs by these groups? Maybe I can use our environment variable to achieve it? I only need to access this field at search time, to create alerts that notify machine owners instead of me.
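One direction that might be worth testing (a sketch, not a confirmed answer; the field name "group" and the value "rdsh" are only illustrative): a forwarder can attach an indexed field to everything its inputs send by setting _meta in inputs.conf, and the field becomes searchable by name once fields.conf on the search head marks it as indexed. That would let alerts filter the UF introspection logs by group at search time. Populating the value automatically from the environment variable, rather than baking it into each group's deployed app, is something you would need to verify separately.

# inputs.conf deployed to the forwarders of one group ("rdsh" is only an example value)
[default]
_meta = group::rdsh

# fields.conf on the search head, so searches can use group=rdsh directly
[group]
INDEXED = true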
Ahh... ok. If it is supposed to mean all results for the max value of field1, it's also relatively easy to use sort and streamstats. Your typical | sort - field1 will give you your data sorted in descending order. That means you have your max values first. This in turn means that you don't have to run eventstats over the whole result set. Just use streamstats to copy over the first value, which must be the maximum value: | streamstats current=t first(field1) as field1max Now all that's left is to filter: | where field1=field1max Since we're operating on our initial results, we've retained all original fields. Of course, for an additional performance boost, you can remove unnecessary fields prior to sorting so you don't needlessly drag them around just to get rid of them immediately afterwards, if you have a big data set to sort. (The same goes for limiting your processed data volume with the eventstats-based solution.)
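Put together, the pipeline described above looks like this (the leading search is a placeholder; field1 is the field name used in the post):

<your base search>
| sort - field1
| streamstats current=t first(field1) as field1max
| where field1=field1max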
Yes, those volumes of data seem off. While in the csv case you could argue that it's a gzipped file size (though it still looks a bit low - with typical gzip compression you expect around a 1:10 ratio), the KV store size is way too small.
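As a purely hypothetical illustration of that ratio: a 1 GB CSV lookup gzipped at roughly 1:10 would come out near 100 MB, so a reported figure orders of magnitude below that is unlikely to be the compressed file size.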
I'm not aware of any built-in mechanism that allows you to do so. Maybe some external EDR solution captures that, but I can't recommend a specific one.
Hi Mario, Thanks for the update. Since it's not creating updated values, we have applied the formula in Power Automate itself. Now the issue is: "type": "string", "value": "abcd@ef.gh.com" - the value ""abcd@ef.gh.com"" is not popping up under the AppD schema, though the value is getting parsed. I initialized it as a string and it worked earlier; once I deleted the schema and created it again, the value shows as null.
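One guess, purely illustrative (the field name appleId is taken from earlier in the thread; nothing here is confirmed): the doubled quotes suggest the value already carries literal quote characters when it reaches the schema, so instead of a plain string the field receives a quoted string, which a string-typed schema field may reject or store as null. Roughly the difference between

"appleId": "abcd@ef.gh.com"

and

"appleId": "\"abcd@ef.gh.com\""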
I have already installed TA - Windows on the UF and restarted it, and also installed it on the indexer, but why can I still not read the output? Did I forget to set something?
Hmm, so if one of the endpoints got hacked and someone is running a script, we cannot collect information from the output in cmd/PowerShell?
I am using the same index for both stats distinct_count and timechart distinct_count, but the results from timechart are always higher. Does anyone know the reason behind this and how to resolve it? I have also tried with a bucket span and plotted the timechart, but the results did not match stats distinct_count. Please help.
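For comparison (not from this thread; your_index and user are placeholder names): a timechart computes the distinct count separately for each time bucket, so any value that appears in more than one bucket is counted once per bucket. Summing or totalling those per-bucket counts will therefore normally come out higher than a single distinct count over the whole range. Running the two searches below over the same time range makes the difference visible:

index=your_index
| stats dc(user) as overall_distinct

index=your_index
| timechart span=1d dc(user) as daily_distinct
| stats sum(daily_distinct) as summed_daily_distinct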
You are correct not to want to use join; in fact, try not to use join even if they are in different indices. Thank you for illustrating the data and desired output. Here is an idea:

sourcetype IN (CopyLocation, TargetLocation)
| eval target_log = replace(_raw, "^[^<]+", "")
| spath input=target_log
| mvexpand FileTransfer.FileName
| eval FileName = coalesce(file_name, 'FileTransfer.FileName')
| chart values(_time) over FileName by sourcetype
| sort CopyLocation
| foreach *Location [eval <<FIELD>> = strftime(<<FIELD>>, "%F %T")]
| fillnull TargetLocation value=Pending

(Obviously I do not know your sourcetype names, so adjust the above accordingly.) Here is an emulation to produce the sample data you illustrated:

| makeresults
| eval sourcetype = "CopyLocation", data = mvappend("2024-12-18 17:02:50, file_name=\"XYZ.csv\", file copy success", "2024-12-18 17:02:58, file_name=\"ABC.zip\", file copy success", "2024-12-18 17:03:38, file_name=\"123.docx\", file copy success", "2024-12-18 18:06:19, file_name=\"143.docx\", file copy success")
| mvexpand data
| eval _time = strptime(replace(data, ",.+", ""), "%F %T")
| rename data AS _raw
| extract
| append [makeresults
  | eval sourcetype = "TargetLocation", _raw = "2024-12-18 17:30:10 <FileTransfer status=\"success\"> <FileName>XYZ.csv</FileName> <FileName>ABC.zip</FileName> <FileName>123.docx</FileName> </FileTransfer>"
  | eval _time = strptime(replace(_raw, "<.+", ""), "%F %T")]
``` the above emulates sourcetype IN (CopyLocation, TargetLocation) ```

Play with it and compare with real data.
There is a user, let's say ABC, and I want to check why his AD account is locked.
Lol, we are all secretly trying to decipher the sentence. (I thought @bowesmana had both methods covered when I read this last night.) OK, I think I cracked the code. Using the same strategy (but deterministic for easy validation), I constructed this mock dataset:

title1     title4     value
Title1:B   Title4-Y   1
Title1:C   Title4-X   2
Title1:A   Title4-W   3
Title1:B   Title4-V   4
Title1:C   Title4-U   0
Title1:A   Title4-T   1
Title1:B   Title4-S   2
Title1:C   Title4-R   3
Title1:A   Title4-Q   4
Title1:B   Title4-Z   0
Title1:C   Title4-Y   1
Title1:A   Title4-X   2
Title1:B   Title4-W   3
Title1:C   Title4-V   4
Title1:A   Title4-U   0
Title1:B   Title4-T   1
Title1:C   Title4-S   2
Title1:A   Title4-R   3
Title1:B   Title4-Q   4
Title1:C   Title4-Z   0
Title1:A   Title4-Y   1
Title1:B   Title4-X   2
Title1:C   Title4-W   3
Title1:A   Title4-V   4
Title1:B   Title4-U   0

I think the semantics are: find the Title4 that corresponds to the maximum value in the whole set - in this case Title4-Q and Title4-V, as they correspond to value 4 - then find all rows with these Title4 and group them by Title1. I.e.,

| eventstats max(value) as max_val
| where value == max_val
| stats values(title4) as title4 by title1

The output for the mock data is

title1     title4
Title1:A   Title4-Q Title4-V
Title1:B   Title4-Q Title4-V
Title1:C   Title4-V

Here is the emulation

| makeresults count=25
| streamstats count
| eval value = count % 5
| eval title1="Title1:".mvindex(split("ABCDE",""), count % 3)
| eval title4="Title4-".mvindex(split("ZYXWVUTSRQ",""), count % 10)
| fields - _time count
``` data emulation above ```

Similarly, a double-stats strategy can be construed.
I have also had a use case where I had 8 million rows of user data and needed to enrich index data with data from that 8M-row lookup. As a short-term stopgap, I ended up indexing the 8M user rows and then combining dataset 1 with the user data from the index, correlating them using stats by xx, because the performance of a lookup that size was not good enough.
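A minimal sketch of that correlation pattern (index and field names below are placeholders, not the ones from the original use case): both datasets are pulled in one base search and collapsed on the shared key, so the indexed user data enriches the events without a lookup or a join.

(index=event_data) OR (index=user_lookup_data)
| fields user_id department manager action
| stats values(department) as department values(manager) as manager values(action) as action by user_id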