Ah ok, I just did that and it also doesn't work, as can be seen at: https://regex101.com/r/Papbq3/1 To make matters worse, the 'User Account Control' field can contain multiple values, for example when you disable an account and at the same time enable 'Don't Expire Password'. (https://regex101.com/r/OBVqt2/1)
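For what it's worth, the multi-value case can be handled by capturing every indented value line under the field name rather than expecting a single one. A minimal Python sketch of that idea; the sample event body below is hypothetical, and the real layout in the regex101 links may differ:

```python
import re

# Hypothetical fragment of a Windows security event body; the actual
# indentation and wording in the real data may differ.
sample = """User Account Control:
\t\tAccount Disabled
\t\t'Don't Expire Password' - Enabled
Old UAC Value:\t0x210"""

# First grab the whole indented block after "User Account Control:",
# then pull out each value line individually.
block = re.search(r"User Account Control:\n((?:\t\t.+\n?)+)", sample)
values = re.findall(r"\t\t(.+)", block.group(1)) if block else []
print(values)
```

The same "capture the block, then split it" approach translates to Splunk as a rex for the block followed by a second extraction (or makemv) over the captured text.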
"unfortunately. While I am sure the data exists." By "data exists", do you mean that some values of the field "Infra Finding" are the string "Yes", etc.? Can you show sample output of:

earliest=-10w latest=now LOB=HEC search_name!=null
| eval ycw = strftime(_time, "%Y_%U")
| fields ycw "Infra Finding" "OS Finding" "App Finding"
| stats values(*) as * by ycw
As I said, use my rex on the unedited field, i.e. replace the three lines in the solution with just my one line.
First of all, thank you for the excellent statement of the problem with sample data and desired output. So, part of Badge is coded as level, and part of it as type. You want to sort the badges by level, then take the highest level in each type along with the latest expiration date. Correct? Here you go:

| eval type = split(Badge, "_")
| eval level = mvfind(mvappend("Novice", "Capable", "Expert"), mvindex(type, -1)) + 1
| fillnull level
| eval type = mvindex(type, -2)
| eval expire_ts = strptime(ExpireDate, "%m/%d/%y")
| sort - level, expire_ts, + "Last name" "First name"
| dedup Domain, "First name", "Last name", Email, type
| table Domain, "First name", "Last name", Email, Badge, ExpireDate

Your sample data gives:

Domain   First name  Last name  Email                   Badge                 ExpireDate
jkl.com  brandy      duggan     brandy.duggan@jkl.com   Sell_Expert           9/5/24
mno.com  lisa        edwards    lisa.edwards@mno.com    Sell_Expert           12/6/23
mno.com  lisa        edwards    lisa.edwards@mno.com    Renew_Deploy_Capable  8/1/24
def.com  andy        braden     andy.braden@def.com     Deploy_Capable        1/3/24
abc.com  allen       anderson   allen.anderson@abc.com  Renew_Sell_Novice     10/3/24
ghi.com  bill        connors    bill.connors@ghi.com    Sell_Novice           10/17/23

I'm not sure why your desired output doesn't use the "Renew" prefix. If I understand it correctly, "Renew_" means that the badge has yet to be renewed.
But if you want to get rid of it, just add:

| eval Badge = replace(Badge, "Renew_", "")

Here is an emulation that you can play with and compare with real data:

| makeresults format=csv data="First name,Last name,Email,Domain,Badge,EarnDate,ExpireDate
lisa,edwards,lisa.edwards@mno.com,mno.com,Sell_Novice,5/22/22,5/22/23
lisa,edwards,lisa.edwards@mno.com,mno.com,Deploy_Novice,5/27/22,5/27/23
andy,braden,andy.braden@def.com,def.com,Deploy_Novice,11/10/22,11/10/23
allen,anderson,allen.anderson@abc.com,abc.com,Sell_Novice,11/18/22,11/18/23
andy,braden,andy.braden@def.com,def.com,Deploy_Capable,1/3/23,1/3/24
bill,connors,bill.connors@ghi.com,ghi.com,Sell_Novice,10/17/22,10/17/23
brandy,duggan,brandy.duggan@jkl.com,jkl.com,Sell_Novice,7/6/23,7/6/24
lisa,edwards,lisa.edwards@mno.com,mno.com,Sell_Capable,7/24/22,7/24/23
lisa,edwards,lisa.edwards@mno.com,mno.com,Deploy_Capable,8/20/22,8/20/23
brandy,duggan,brandy.duggan@jkl.com,jkl.com,Sell_Capable,8/10/23,8/10/24
brandy,duggan,brandy.duggan@jkl.com,jkl.com,Sell_Expert,9/5/22,9/5/24
allen,anderson,allen.anderson@abc.com,abc.com,Renew_Sell_Novice,10/3/23,10/3/24
lisa,edwards,lisa.edwards@mno.com,mno.com,Sell_Expert,12/6/22,12/6/23
lisa,edwards,lisa.edwards@mno.com,mno.com,Renew_Deploy_Capable,8/1/23,8/1/24"
``` data emulation above ```
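The level/dedup logic in the SPL above can also be sketched in plain Python, if that helps with reasoning about it. Field names are taken from the sample data; tie-breaking by last/first name is omitted for brevity:

```python
from datetime import datetime

# Badge level ranking, mirroring mvfind(mvappend(...)) + 1 in the SPL.
LEVELS = {"Novice": 1, "Capable": 2, "Expert": 3}

def best_badges(rows):
    """Keep, per (Email, badge type), the row with the highest level,
    breaking ties by the latest ExpireDate (like sort + dedup in SPL)."""
    best = {}
    for r in rows:
        parts = r["Badge"].split("_")
        level = LEVELS.get(parts[-1], 0)          # fillnull -> 0 for unknowns
        btype = parts[-2]                          # mvindex(type, -2)
        expire = datetime.strptime(r["ExpireDate"], "%m/%d/%y")
        group, rank = (r["Email"], btype), (level, expire)
        if group not in best or rank > best[group][0]:
            best[group] = (rank, r)
    return [r for _, r in best.values()]

rows = [
    {"Email": "lisa.edwards@mno.com", "Badge": "Sell_Capable", "ExpireDate": "7/24/23"},
    {"Email": "lisa.edwards@mno.com", "Badge": "Sell_Expert", "ExpireDate": "12/6/23"},
    {"Email": "lisa.edwards@mno.com", "Badge": "Renew_Deploy_Capable", "ExpireDate": "8/1/24"},
]
kept = sorted(r["Badge"] for r in best_badges(rows))
print(kept)
```

Note how "Renew_Deploy_Capable" and "Sell_Expert" survive as the best badge in each type, while the lower-level "Sell_Capable" is dropped, matching the dedup behaviour.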
Hi @oneemailall .. I created sample logs as a CSV file, uploaded it to Splunk, created a new field ExpireDateEpoch, sorted the sample logs using that new field, and created the report. I hope you can adjust the SPL and fine-tune it to your requirements. As you are a new member, let me suggest: Karma points are appreciated, thanks.

Verifying the epoch conversion:

source="csv-groupby.txt" sourcetype="csv"
| eval ExpireDateEpoch=strptime(ExpireDate,"%m/%d/%y")
| table Domain, Firstname, Lastname, Email, Badge, ExpireDate, ExpireDateEpoch
| sort ExpireDateEpoch ExpireDate

Sorted report:

source="csv-groupby.txt" sourcetype="csv"
| eval ExpireDateEpoch=strptime(ExpireDate,"%m/%d/%y")
| sort ExpireDateEpoch
| table Domain, Firstname, Lastname, Email, Badge, ExpireDate
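The reason for converting to epoch before sorting: as plain strings, "%m/%d/%y" dates sort lexicographically rather than chronologically. A quick Python illustration of the difference:

```python
from datetime import datetime

# Dates taken from the sample data in this thread.
dates = ["9/5/24", "10/17/23", "12/6/23", "1/3/24"]

# Plain string sort: compares characters, so "1/3/24" lands before
# "10/17/23" and "9/5/24" lands last only by accident of its first digit.
string_sorted = sorted(dates)

# Parsing first (the strptime/epoch approach) gives true chronological order.
chrono_sorted = sorted(dates, key=lambda d: datetime.strptime(d, "%m/%d/%y"))

print(string_sorted)
print(chrono_sorted)
```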
Thank you Chris. Do you (or anyone else) know if this requirement is documented somewhere? The implementation has a security requirement mandating STIG compliance, and having /tmp mounted exec (i.e. without noexec) is a finding. So, to keep it this way it needs to be documented as a vendor dependency, but I haven't found this stated as a requirement in anything from Splunk.
Cheers, I am hoping to get some help with a Splunk search to generate a badging report. I'll explain further. There are two types of badges students can earn: Sell and Deploy. There are three levels of badge within each type: Novice, Capable, and Expert. Issued badges expire after one year, so students must either renew their existing badge before the expiration date or earn the next-higher-level badge before then. If a student renews their existing badge, the internal system marks the badge name as Renew_Novice, Renew_Capable, or Renew_Expert depending on which badge they earn. I've supplied some demo data to help illustrate what the data looks like. I need to generate a report that lists the student's name, email address, highest-level badge name, and the expiration date of that badge. There is no need to see lower-level badges or their expiration dates. Thank you. Each event is a student name and badge type. I onboarded the data so that the timestamp of each event (_time) is the EarnDate of the badge. The output of the Splunk report should show the following:

Domain, First name, Last name, Email, Badge, ExpireDate
mno.com, lisa edwards, lisa.edwards@mno.com, Sell_Expert, 12/6/23
mno.com, lisa edwards, lisa.edwards@mno.com, Deploy_Capable, 8/1/24
abc.com, allen anderson, allen.anderson@abc.com, Sell_Novice, 10/3/24
def.com, andy braden, andy.braden@def.com, Deploy_Capable, 1/3/24
ghi.com, bill connors, bill.connors@ghi.com, Sell_Novice, 10/17/23
jkl.com, brandy duggan, brandy.duggan@jkl.com, Sell_Expert, 9/5/24

Demo data below.
First name  Last name  Email                   Domain   Badge                 EarnDate  ExpireDate
lisa        edwards    lisa.edwards@mno.com    mno.com  Sell_Novice           5/22/22   5/22/23
lisa        edwards    lisa.edwards@mno.com    mno.com  Deploy_Novice         5/27/22   5/27/23
andy        braden     andy.braden@def.com     def.com  Deploy_Novice         11/10/22  11/10/23
allen       anderson   allen.anderson@abc.com  abc.com  Sell_Novice           11/18/22  11/18/23
andy        braden     andy.braden@def.com     def.com  Deploy_Capable        1/3/23    1/3/24
bill        connors    bill.connors@ghi.com    ghi.com  Sell_Novice           10/17/22  10/17/23
brandy      duggan     brandy.duggan@jkl.com   jkl.com  Sell_Novice           7/6/23    7/6/24
lisa        edwards    lisa.edwards@mno.com    mno.com  Sell_Capable          7/24/22   7/24/23
lisa        edwards    lisa.edwards@mno.com    mno.com  Deploy_Capable        8/20/22   8/20/23
brandy      duggan     brandy.duggan@jkl.com   jkl.com  Sell_Capable          8/10/23   8/10/24
brandy      duggan     brandy.duggan@jkl.com   jkl.com  Sell_Expert           9/5/22    9/5/24
allen       anderson   allen.anderson@abc.com  abc.com  Renew_Sell_Novice     10/3/23   10/3/24
lisa        edwards    lisa.edwards@mno.com    mno.com  Sell_Expert           12/6/22   12/6/23
lisa        edwards    lisa.edwards@mno.com    mno.com  Renew_Deploy_Capable  8/1/23    8/1/24
I think you have the right idea by using streamstats and timechart, but you have them in the wrong order. Try this untested SPL as an alternative to nesting latest within avg:

| streamstats latest(codecoverage.totalperc) as totalperc by reponame
| timechart span=1mon avg(totalperc) as avgtotalperc by team
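The ordering matters because the carried-forward latest value per repo has to exist before the per-team monthly average is taken. A rough Python sketch of the same two-stage idea; the events, team, and repo names below are made up for illustration:

```python
from collections import defaultdict

# (month, team, repo, coverage); coverage None means the repo was not
# rebuilt that month, so the previous value should carry forward.
events = [
    ("2024-01", "core", "repo-a", 50.0),
    ("2024-01", "core", "repo-b", 70.0),
    ("2024-02", "core", "repo-a", 60.0),
    ("2024-02", "core", "repo-b", None),
]

latest = {}                  # stage 1: streamstats latest(...) by reponame
monthly = defaultdict(list)  # stage 2: values feeding the timechart avg
for month, team, repo, cov in events:
    if cov is not None:
        latest[repo] = cov   # update the carried value on a fresh report
    if repo in latest:
        monthly[(month, team)].append(latest[repo])

# avg(totalperc) by team per month
avg = {k: sum(v) / len(v) for k, v in monthly.items()}
print(avg)
```

Note how repo-b still contributes its carried 70.0 to February's average even though it produced no new report that month, which is exactly what the streamstats-before-timechart ordering buys you.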
The KOs will remain, but will become "orphans" (owned by nobody).  They can be re-assigned to another user, however.
Hi @PickleRick , I tried the query below but it displays as a normal table; I am not getting the layout I showed in the image. I just want to know whether that is doable in Splunk, and if yes, how can I tweak my query?

| tstats count as Total_Messages where index=app-logs TERM(Request) TERM(received) TERM(from) TERM(all) TERM(applications)
| appendcols [| tstats count as App_logs where index=app-logs TERM(Application) TERM(logs) TERM(received)]
| appendcols [| tstats count as Exception_logs where index=app-logs TERM(Exception) TERM(logs) TERM(received)]
| appendcols [| tstats count as Canceled_logs where index=app-logs TERM(unpassed) TERM(logs) TERM(received)]
| appendcols [| tstats count as 401_mess_logs where index=app-logs TERM(401) TERM(error) TERM(message)]
| appendcols [| tstats count as url where index=app-logs TERM(url) TERM(info) TERM(staged)]
| appendcols [| tstats count as cleared_log where index=app-logs TERM(Filtered) TERM(logs) TERM(arranged)]
| table *
If we delete a user without reassigning their KOs to another user, what happens to those KOs? @richgalloway 
I'm trying to look at the last result of code coverage per repo and then average that out for the team each month. It would be something like the below, but nesting a latest within an average doesn't work:

| timechart span=1mon avg(latest(codecoverage.totalperc) by reponame) by team

With this, I foresee an issue where the repos built every month aren't static but dynamic. I was looking at streamstats to see how the events change over time, but I can still only get it grouped by reponame or by team, not by both:

| timechart span=1mon latest(codecoverage.totalperc) as now by reponame
| untable _time, reponame, now
| sort reponame
| streamstats current=f window=1 last(now) as prev by reponame
| eval Difference=now-prev
| table _time, reponame, Difference
This eval also gives you days, hours and minutes:

| eval durations = tostring(durAsSec, "duration")

Just select those from that string.
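For reference, the same breakdown can be computed with plain integer arithmetic if you'd rather have the days/hours/minutes as separate values; a small Python sketch:

```python
def duration_parts(dur_as_sec):
    """Split a duration in seconds into (days, hours, minutes, seconds),
    the same components the "duration" format string renders as text."""
    days, rest = divmod(int(dur_as_sec), 86400)
    hours, rest = divmod(rest, 3600)
    minutes, seconds = divmod(rest, 60)
    return days, hours, minutes, seconds

# 93784 s = 1 day + 2 h + 3 min + 4 s
print(duration_parts(93784))
```

The equivalent in SPL would be a chain of eval statements using floor() and the modulo operator.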
I’m not 100% sure whether this is true or not. All UFs have a fishbucket db for tracking ingested files, and my guess is that this db needs some conversion from time to time. For that reason I propose following the same upgrade path with versions as you are following with Enterprise. Another option is to just back up your configuration, remove the whole installation, and start from scratch with the old configuration. It’s your decision whether you also want to save the UF’s GUID or not. r. Ismo
I'm working on a column chart visualization that shows income ranges:

"$24,999 and under"
"$25,000 - $99,999"
"$100,000 and up"

The problem is that when the column chart orders them, it puts "$100,000 and up" first instead of last. I've created an eval that assigns a sort_order value based on the field value, which orders the rows correctly. However, I can't figure out how to get the column chart to sort according to that field. This is what I'm currently trying:

| eval sort_order=case(income=="$24,000 and under",1, income=="$25,000 - $39,999",2, income=="$40,000 - $79,999",3, income=="$80,000 - $119,999",4, income=="$120,000 - $199,999",5, income=="$200,000 or more",6)
| sort sort_order
| chart count by income

Here's the visualization: Is there some other way to accomplish this?
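The sort_order idea itself is sound; in plain Python terms it amounts to ranking each bucket and sorting by the rank instead of alphabetically (bucket labels below are taken from the three listed at the top of this post):

```python
# Explicit rank per income bucket, playing the role of the sort_order eval.
ORDER = {
    "$24,999 and under": 1,
    "$25,000 - $99,999": 2,
    "$100,000 and up": 3,
}

categories = ["$100,000 and up", "$24,999 and under", "$25,000 - $99,999"]
# Unknown buckets (rank 99) fall to the end rather than crashing the sort.
categories.sort(key=lambda c: ORDER.get(c, 99))
print(categories)
```

In Splunk the chart step can still re-sort categories lexicographically after your sort, which is why a common workaround is to prefix each label with its rank (e.g. "1. $24,999 and under") so alphabetical order and logical order coincide.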
Hello All, I have a lookup file which stores a set of SPL searches and it gets refreshed periodically. How can I build a search query such that it iteratively executes each SPL from the lookup file? Any suggestions and ideas will be very helpful. Thank you, Taruchit
Ok. If you want to find the moment at which the PID changed, you have to carry it over to the next event (otherwise Splunk doesn't have any notion of a relationship between separate events) using the "autoregress" command or, in a more universal manner, using streamstats:

| streamstats current=f last(PID) as lastPID by COMMAND

This way you can see when lastPID for a given command is different from PID (mind you, Splunk by default sorts in reverse chronological order, so this way you'll find the latest event before the restart; you can tweak this solution with sorting to find the first one after the restart). As a side note, don't use wildcards at the beginning of the search term unless you absolutely must.
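The same carry-over can be sketched in Python for intuition: remember the previous PID per COMMAND and flag events where it differs (the sample events below are made up, and are assumed to arrive in chronological order):

```python
def find_restarts(events):
    """Return events whose PID differs from the previous PID seen for the
    same COMMAND -- the Python analogue of comparing PID with lastPID."""
    last_pid = {}
    restarts = []
    for e in events:
        cmd, pid = e["COMMAND"], e["PID"]
        if cmd in last_pid and last_pid[cmd] != pid:
            restarts.append(e)
        last_pid[cmd] = pid  # carry the PID over to the next event
    return restarts

events = [
    {"COMMAND": "nginx", "PID": 100},
    {"COMMAND": "nginx", "PID": 100},
    {"COMMAND": "nginx", "PID": 245},  # restarted here
]
print([e["PID"] for e in find_restarts(events)])
```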
And what have you tried so far and what is the problem with your result? To make things clear - in Splunk there is no "merging" of cells. Maybe there is a visualization which silently renders a table this way but I know of no such thing. Generally, a table has a "full grid" of results. Do you have problems with combining your searches into a single one or do you have the search but can't visualize it?
Forwarders are not as sensitive to a particular upgrade path as "full" Splunk Enterprise instances are. I remember upgrading all the way straight from 7.2 to 9.0. I'm not sure about going as far back as 6.5, but with a reasonable backup of the configuration I wouldn't expect many problems.
No. If you upload a file via the "add data" screen, the events get indexed and are immutable. There is no such thing as "updating" the events. Also, why would you upload the same csv multiple times? Why would you even upload a csv at all? In a normal production environment you typically monitor log files or get events ingested in some other continuous way. Sometimes you upload samples of logs into dev/testing environments, but that's a different case: there you usually don't mind the duplicates, and/or you'd simply delete and recreate the index if duplication was an issue for you.