All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thanks for the query. I need to send an alert a day before the daylight saving time changes in Europe, i.e. Sun, Mar 30, 2025 and Sun, Oct 26, 2025. Could you please tell me how to update this query? Let's say it should run at 2 PM the day before, with the message.
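One way to sketch this (an untested sketch under assumptions: the search is scheduled daily at 14:00, the EU rule is "last Sunday of March and of October", and the message text is a placeholder) is to compute both change dates for the current year and return a result only when the change is tomorrow:

| makeresults
| eval year = strftime(now(), "%Y")
``` last Sunday of a month = its last day minus the %w weekday offset (0 = Sunday) ```
| eval mar31 = strptime(year."-03-31", "%Y-%m-%d")
| eval oct31 = strptime(year."-10-31", "%Y-%m-%d")
| eval dst_start = mar31 - strftime(mar31, "%w") * 86400
| eval dst_end = oct31 - strftime(oct31, "%w") * 86400
| eval tomorrow = relative_time(now(), "+1d@d")
| where dst_start = tomorrow OR dst_end = tomorrow
| eval message = "Clocks change in Europe tomorrow"

With the alert condition set to "number of results > 0", this stays silent every day except Mar 29 and Oct 25.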
Thanks for the code. It seems to work with that manually. I need to report to Cisco that the zero-code instrumentation is not working as expected, but I don't have the privileges to open a case for that.
Hello everyone! I'm not sure how to correctly name this thing, but I will carefully try to explain what I want to achieve.

In our infrastructure we have plenty of Windows Server instances with Universal Forwarder installed. All servers are divided into groups according to the particular application that the servers host. For example, Splunk servers have group 'spl', remote desktop session servers have group 'rdsh', etc. Each server has an environment variable with this group value. By design, the access policy to logs was built on these groups: one group, one index. Because of this, each UF input stanza has the option "index = group". According to this idea, introspection logs of UF agents go to the 'spl' (Splunk) group/index.

And here the nuisance starts. Sometimes UF agents report errors that demand action on the running hosts, for example restarting the agent manually. I see these errors because I have access to the 'spl' index, but I don't have access to all the Windows machines, so I have to notify the machine owner manually.

So, the question is: how can I create a sort of tag or field on the UF that would help me separate all Splunk UF logs by these groups? Maybe I can use our environment variable to achieve it? I only need to access this field at search time, to create alerts that notify machine owners instead of me.
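One possible approach (a sketch; 'group' is a hypothetical field name) is to stamp an indexed field on each Universal Forwarder with _meta in inputs.conf, filling in the value per host (for example from your existing environment variable, at deployment time):

# inputs.conf on a UF belonging to the 'rdsh' group
[default]
_meta = group::rdsh

All events from that UF, including its internal/introspection logs, then carry group=rdsh, so at search time you can split alerts per group (e.g. index=spl group=rdsh ...). Depending on your setup you may also need a matching fields.conf entry ([group] with INDEXED = true) on the search head so the field is treated as an indexed field.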
Ahh... OK. If it is supposed to mean all results for the max value of field1, it's also relatively easy to use sort and streamstats. Your typical

| sort - field1

will give you your data sorted in descending order. That means you have your max values first. This in turn means that you don't have to eventstats over the whole result set. Just use streamstats to copy over the first value, which must be the maximum value:

| streamstats current=t first(field1) as field1max

Now all that's left is to filter:

| where field1=field1max

Since we're operating on our initial results, we've retained all the original fields. Of course, for an additional performance boost you can remove unnecessary fields prior to sorting, so you don't needlessly drag them around just to get rid of them immediately after, if you have a big data set to sort. (The same goes for limiting your processed data volume with the eventstats-based solution.)
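Put together, the approach above forms this pipeline (field1 is the example field from the post; the trailing fields command just drops the helper column):

| sort - field1
| streamstats current=t first(field1) as field1max
| where field1 = field1max
| fields - field1max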
Yes, those volumes of data seem off. While in the CSV case you could argue that it's the gzipped file size (though it still looks a bit low; with typical gzip compression you expect around a 1:10 ratio), the KV size is way too small.
I'm not aware of any built-in mechanism that allows you to do so. Maybe some external EDR solution captures that, but I can't advise on any particular one.
Hi Mario, thanks for the update. Since it's not creating updated values, we have applied the formula in Power Automate itself. Now the issue is: with "type": "string", "value": "abcd@ef.gh.com", the value ""abcd@ef.gh.com"" is not popping up under the AppD schema, although the value is getting parsed. I had initiated it as a string and it worked earlier; once I deleted the schema and created it again, the value shows as null.
I already installed TA-Windows and restarted the UF, and also installed it on the indexer, but why can I still not read the output? Did I forget to configure something?
Hmm, so if one of the endpoints got hacked and someone is running scripts on it, we cannot collect information from the cmd/PowerShell output?
I am using the same index for both a stats distinct count and a timechart distinct count, but the results from timechart are always higher. Does anyone know the reason behind this and how to resolve it? I have also tried with a bucket span and plotted the timechart, but the results did not match the stats distinct count. Please help.
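A common cause of this mismatch (a sketch under assumptions: dc() over some field such as user; the index and field names below are placeholders): timechart computes one distinct count per time bucket, so the same value is counted once in every bucket it appears in, and adding the per-bucket counts together will always give at least as much as a single distinct count over the whole range:

``` one distinct count over the whole search window ```
index=my_index | stats dc(user) AS total_users

``` one distinct count per bucket; the same user can be counted in several buckets ```
index=my_index | timechart span=1h dc(user) AS hourly_users

The two are only expected to agree when no value spans more than one bucket.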
You are correct in not wanting to use join; in fact, try not to use join even if they are in different indices. Thank you for illustrating data and desired output. Here is an idea:

sourcetype IN (CopyLocation, TargetLocation)
| eval target_log = replace(_raw, "^[^<]+", "")
| spath input=target_log
| mvexpand FileTransfer.FileName
| eval FileName = coalesce(file_name, 'FileTransfer.FileName')
| chart values(_time) over FileName by sourcetype
| sort CopyLocation
| foreach *Location
    [eval <<FIELD>> = strftime(<<FIELD>>, "%F %T")]
| fillnull TargetLocation value=Pending

(Obviously I do not know your sourcetype names, so adjust the above accordingly.) Here is an emulation to produce the sample data you illustrated:

| makeresults
| eval sourcetype = "CopyLocation",
  data = mvappend("2024-12-18 17:02:50, file_name=\"XYZ.csv\", file copy success",
    "2024-12-18 17:02:58, file_name=\"ABC.zip\", file copy success",
    "2024-12-18 17:03:38, file_name=\"123.docx\", file copy success",
    "2024-12-18 18:06:19, file_name=\"143.docx\", file copy success")
| mvexpand data
| eval _time = strptime(replace(data, ",.+", ""), "%F %T")
| rename data AS _raw
| extract
| append
    [makeresults
    | eval sourcetype = "TargetLocation",
      _raw = "2024-12-18 17:30:10 <FileTransfer status=\"success\"> <FileName>XYZ.csv</FileName> <FileName>ABC.zip</FileName> <FileName>123.docx</FileName> </FileTransfer>"
    | eval _time = strptime(replace(_raw, "<.+", ""), "%F %T")]
``` the above emulates sourcetype IN (CopyLocation, TargetLocation) ```

Play with it and compare with real data.
There is a user, let's say ABC, and I want to check why his AD account is locked.
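A common starting point (a sketch; the index, sourcetype, and field names are assumptions about a typical Windows event log setup with the Windows TA) is Windows Event ID 4740, "A user account was locked out", which also records the computer the lockout originated from:

index=wineventlog EventCode=4740 user=ABC
| table _time, user, src_user, Caller_Computer_Name

From there, searching failed logons (EventCode 4625) from that source computer around the same time usually shows which device or service is retrying with a stale password.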
Lol, we are all secretly trying to decipher the sentence. (I thought @bowesmana had both methods covered when I read this last night.) OK, I think I cracked the code. Using the same strategy (but deterministic for easy validation) I constructed this mock dataset:

title1    title4    value
Title1:B  Title4-Y  1
Title1:C  Title4-X  2
Title1:A  Title4-W  3
Title1:B  Title4-V  4
Title1:C  Title4-U  0
Title1:A  Title4-T  1
Title1:B  Title4-S  2
Title1:C  Title4-R  3
Title1:A  Title4-Q  4
Title1:B  Title4-Z  0
Title1:C  Title4-Y  1
Title1:A  Title4-X  2
Title1:B  Title4-W  3
Title1:C  Title4-V  4
Title1:A  Title4-U  0
Title1:B  Title4-T  1
Title1:C  Title4-S  2
Title1:A  Title4-R  3
Title1:B  Title4-Q  4
Title1:C  Title4-Z  0
Title1:A  Title4-Y  1
Title1:B  Title4-X  2
Title1:C  Title4-W  3
Title1:A  Title4-V  4
Title1:B  Title4-U  0

I think the semantics is: find the Title4 that corresponds to the maximum value in the whole set (in this case Title4-Q and Title4-V, as they correspond to value 4), then find all rows with these Title4 and group them by Title1. I.e.,

| eventstats max(value) as max_val
| where value == max_val
| stats values(title4) as title4 by title1

The output for the mock data is

title1    title4
Title1:A  Title4-Q Title4-V
Title1:B  Title4-Q Title4-V
Title1:C  Title4-V

Here is the emulation:

| makeresults count=25
| streamstats count
| eval value = count % 5
| eval title1="Title1:".mvindex(split("ABCDE",""), count % 3)
| eval title4="Title4-".mvindex(split("ZYXWVUTSRQ",""), count % 10)
| fields - _time count
``` data emulation above ```

Similarly, a double-stats strategy can be construed.
I have also had a use case where I had 8 million rows of user data and needed to enrich index data with data from that 8M-row lookup. As a short-term stopgap, I ended up indexing the 8M user rows and then correlating dataset 1 with the user data from the index using stats by xx, because the performance of a lookup that size was not good enough.
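That pattern can be sketched like this (all index and field names are hypothetical): bring both sources into one search and group by the shared key instead of using lookup or join:

(index=app_events) OR (index=user_data)
| stats values(department) AS department, count(eval(index="app_events")) AS event_count BY user_id
| where event_count > 0

The where clause keeps only users that actually appear in the event data, mimicking an enriching lookup against the left-hand dataset.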
Dear Splunk Dev team,

One more simple typo issue: a fresh install of Splunk 9.4.0 (last week's version 9.3.2 also had this issue, but I thought I'd wait for the next version before posting) shows the warning message:

"Error in 'lookup' command: Could not construct lookup 'test_lenlookup, data'. See search.log for more details."

(On older Splunk versions I remember this search.log, but nowadays neither search.log nor searches.log is available.)

https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/WhatSplunklogsaboutitself

Per "What Splunk logs about itself", the message should read "See searches.log for more details."

One more, bigger issue: neither search.log nor searches.log is available. None of these searches return anything (the doc says the Splunk search logs are located in sub-folders under $SPLUNK_HOME/var/run/splunk/dispatch/):

index=_* source="*search.log" OR index=_* source="*searches.log" OR index=_* source="C:\Program Files\Splunk\var\run\splunk\dispatch*"

I will post this to Splunk Slack as well, thanks.

If any post helped you in any way, please consider adding a karma point, thanks.
Hi @arunkuriakose

>>> i am trying to visualise this in such a way that i have a live dashboard which shows me which users are passing through which gate

"Visualizing" this through a "live dashboard"... I understand this requirement, but it may be difficult to implement. Maybe reconsider like this:

1. Have a basic dashboard with two panels.
2. One panel, simple in design, for gate1; this panel will show you which emp_id crosses gate1 at what time:

time    emp_id
9am     1234
9:05am  2383

3. Have another panel for gate2 with the same design logic.
4. Auto-refresh this dashboard every, say, 30 seconds (increase or decrease this depending on your requirement).
5. You can have more panels, like a missing-person panel (emp_id who entered through gate1 but did not exit through gate2, etc.).

Please add karma / upvote to any post which helped you in any way, thanks.
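A sketch of the search behind the gate1 panel (the index and field names are assumptions about the badge data):

index=badge_logs gate=gate1
| table _time, emp_id
| sort - _time

The gate2 panel would be the same search with gate=gate2, and the missing-person panel could compare the two, e.g. | stats values(gate) AS gates BY emp_id | where gates="gate1".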
I have saved the report using no time range. The report works, getting results for the last 60 minutes as expected. My issue is that when I query testReport I want to use different earliest and latest times, so I can have two time ranges in the same chart. Something like:

| savedsearch "testReport" earliest="12/08/2024:00:00:00" latest="12/08/2024:23:59:00"
| table id, response_time
| eval lineSource = "first_day"
| append
    [| savedsearch "testReport" earliest="12/09/2024:00:00:00" latest="12/09/2024:23:59:00"
    | table id, response_time
    | eval lineSource = "second_day"]
Hi @Alex_LC,

You can try the below.

props.conf:

[my_sourcetype]
LINE_BREAKER = (\[1\]\[DATA\]BEGIN[-\s]+)
SHOULD_LINEMERGE = false
TRANSFORMS-transform2xml = transform2xml
KV_MODE = xml

transforms.conf:

[transform2xml]
REGEX = ([^\[]+)(\[\d+\][\r\n]+<xml>)([^\[]+)(<\/xml>[^$]+)
FORMAT = <xml><time>$1</time>$3</xml>
DEST_KEY = _raw

It should create a separate event for each block, with a time field, like below:

<xml><time>08:03:09</time>
<tag1>some more data</tag1>
<nestedTag>
<tag2>fooband a bit more</tag2>
</nestedTag>
</xml>
Per the Search Reference Manual, If you specify All Time in the time range picker, the savedsearch command uses the time range that was saved with the saved search. If you specify any other time in the time range picker, the time range that you specify overrides the time range that was saved with the saved search.
For simplicity, assume I have the following saved as a report (testReport):

index=testindex host=testhost earliest=-90m latest=now

I need to create 2 bar graphs in the same chart comparing two dates. For starters, I need to be able to run the above with a time I specify overriding the time range above:

| savedsearch "testReport" earliest="12/08/2024:00:00:00" latest="12/08/2024:23:59:00"

I have seen a few similar questions here, but I don't think any has a working solution.