All Topics

Cheers, I am hoping to get some help with a Splunk search to generate a badging report. I'll explain further.

There are two types of badges students can earn, Sell and Deploy, with three levels of badge within each type: Novice, Capable, and Expert. Issued badges expire after one year. This means students must either renew their existing badge before the expiration date or earn the next-higher-level badge prior to the expiration date. If a student renews their existing badge, the internal system marks the badge name as Renew_Novice, Renew_Capable, or Renew_Expert depending on which badge they earn. I've supplied some demo data to help illustrate what the data looks like.

I need to generate a report that lists the student's name, email address, highest-level badge name, and expiration date of the highest-level badge. There is no need to see lower-level badges or their expiration dates. Thank you.

Each event is a student name and badge type. I onboarded the data so that the timestamp for each event (_time) is the EarnDate of the badge.

The output of the Splunk report should show the following:

Domain, First name, Last name, Email, Badge, ExpireDate
mno.com, lisa edwards, lisa.edwards@mno.com, Sell_Expert, 12/6/23
mno.com, lisa edwards, lisa.edwards@mno.com, Deploy_Capable, 8/1/24
abc.com, allen anderson, allen.anderson@abc.com, Sell_Novice, 10/3/24
def.com, andy braden, andy.braden@def.com, Deploy_Capable, 1/3/24
ghi.com, bill connors, bill.connors@ghi.com, Sell_Novice, 10/17/23
jkl.com, brandy duggan, brandy.duggan@jkl.com, Sell_Expert, 9/5/24

Demo data below:

First name | Last name | Email | Domain | Badge | EarnDate | ExpireDate
lisa | edwards | lisa.edwards@mno.com | mno.com | Sell_Novice | 5/22/22 | 5/22/23
lisa | edwards | lisa.edwards@mno.com | mno.com | Deploy_Novice | 5/27/22 | 5/27/23
andy | braden | andy.braden@def.com | def.com | Deploy_Novice | 11/10/22 | 11/10/23
allen | anderson | allen.anderson@abc.com | abc.com | Sell_Novice | 11/18/22 | 11/18/23
andy | braden | andy.braden@def.com | def.com | Deploy_Capable | 1/3/23 | 1/3/24
bill | connors | bill.connors@ghi.com | ghi.com | Sell_Novice | 10/17/22 | 10/17/23
brandy | duggan | brandy.duggan@jkl.com | jkl.com | Sell_Novice | 7/6/23 | 7/6/24
lisa | edwards | lisa.edwards@mno.com | mno.com | Sell_Capable | 7/24/22 | 7/24/23
lisa | edwards | lisa.edwards@mno.com | mno.com | Deploy_Capable | 8/20/22 | 8/20/23
brandy | duggan | brandy.duggan@jkl.com | jkl.com | Sell_Capable | 8/10/23 | 8/10/24
brandy | duggan | brandy.duggan@jkl.com | jkl.com | Sell_Expert | 9/5/22 | 9/5/24
allen | anderson | allen.anderson@abc.com | abc.com | Renew_Sell_Novice | 10/3/23 | 10/3/24
lisa | edwards | lisa.edwards@mno.com | mno.com | Sell_Expert | 12/6/22 | 12/6/23
lisa | edwards | lisa.edwards@mno.com | mno.com | Renew_Deploy_Capable | 8/1/23 | 8/1/24
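A minimal sketch of one possible approach, assuming the events live in an index called badges and that Badge, Email, Domain, First name, Last name, and ExpireDate are extracted fields (the index and field names are assumptions): strip the Renew_ prefix, rank the three levels, then keep the highest-level, most recently earned badge per student and badge type.

index=badges
| eval base=replace(Badge, "^Renew_", "")
| eval type=mvindex(split(base, "_"), 0)
| eval level=case(like(base, "%Novice"), 1, like(base, "%Capable"), 2, like(base, "%Expert"), 3)
| sort 0 Email type -level -_time
| dedup Email type
| table Domain, "First name", "Last name", Email, base, ExpireDate
| rename base as Badge

The sort/dedup pair keeps the first row per Email and type after ordering by level (descending) and then recency, which is the highest-level badge and, among equals, the latest one; reporting the base name rather than the raw Badge matches the expected output, where a renewed badge shows as Sell_Novice rather than Renew_Sell_Novice.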
I'm trying to look at the last result of code coverage per repo and then average that out for the team each month. It would be something like the search below, but nesting a latest() inside an avg() doesn't work:

| timechart span=1mon avg(latest(codecoverage.totalperc) by reponame) by team

With this, I also foresee an issue where the repos built every month aren't static but dynamic. I was looking at streamstats to see how the events change over time, but I can still only get it grouped by reponame or by team, not by both:

| timechart span=1mon latest(codecoverage.totalperc) as now by reponame
| untable _time, reponame, now
| sort reponame
| streamstats current=f window=1 last(now) as prev by reponame
| eval Difference=now-prev
| table _time, reponame, Difference
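A minimal sketch of one way to get both groupings, assuming codecoverage.totalperc, reponame, and team are all fields on the same events: bin by month, take the latest coverage per repo, then average those per team. This also tolerates a changing set of repos, since only the repos that reported in a given month contribute to that month's average.

| bin _time span=1mon
| stats latest("codecoverage.totalperc") as cov by _time, team, reponame
| stats avg(cov) as avg_coverage by _time, team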
I'm working on a column chart visualization that shows income ranges:

"$24,999 and under"
"$25,000 - $99,999"
"$100,000 and up"

The problem is that when the column chart orders them, it puts "$100,000 and up" first instead of last. I've created an eval that assigns a sort_order value based on the field value, which orders them correctly. However, I can't figure out how to get the column chart to sort according to that field. This is what I'm currently trying:

| eval sort_order=case(income=="$24,000 and under",1, income=="$25,000 - $39,999",2, income=="$40,000 - $79,999",3, income=="$80,000 - $119,999",4, income=="$120,000 - $199,999",5, income=="$200,000 or more",6)
| sort sort_order
| chart count by income

(Screenshot of the visualization omitted.)

Is there some other way to accomplish this?
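One possible workaround (a sketch, reusing the case() above): chart re-sorts its rows by the split field, so aggregate with stats over both fields instead, sort, and then drop the helper column; the column chart keeps the row order that stats produced.

| eval sort_order=case(income=="$24,000 and under",1, income=="$25,000 - $39,999",2, income=="$40,000 - $79,999",3, income=="$80,000 - $119,999",4, income=="$120,000 - $199,999",5, income=="$200,000 or more",6)
| stats count by sort_order, income
| sort 0 sort_order
| fields - sort_order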
Hello All,

I have a lookup file which stores a set of SPL searches and gets refreshed periodically. How can I build a search query that iteratively executes each SPL from the lookup file? Any suggestions and ideas would be very helpful.

Thank you,
Taruchit
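One possible approach is the map command, which runs a search once per input row. A rough sketch, assuming the lookup is named spl_list.csv and stores each search in a field called spl (both hypothetical names), and that each stored value is a complete search body:

| inputlookup spl_list.csv
| map maxsearches=100 search="search $spl$"

Note that map is capped by its maxsearches setting, quoting inside the stored SPL can be tricky, and the results of all iterations are appended together, so the stored searches should produce compatible fields.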
I am uploading CSV-format data into Splunk. Every time I change the data or add any info, I upload the full CSV file into Splunk again, so now I have duplicate events in Splunk. Is it possible to show only the data from the last uploaded CSV file?

Thanks
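One possible way to keep only the most recent upload (a sketch; the index name is a placeholder, and it assumes all events from one upload are indexed within about a minute of each other):

index=my_csv_index
| eventstats max(_indextime) as last_load
| where _indextime >= last_load - 60

An alternative, if the rows have a unique key field, is | dedup <key> sortby -_indextime to keep just the newest copy of each row.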
Hi All,

We have various Splunk UFs running on Windows and Unix machines. We are planning to upgrade them all to the latest universal forwarder. We have versions from 6.5.0 to 8.2.7, and our plan is to upgrade to 9.x.

Can someone advise on the intermediate versions needed to upgrade from 6.5.0 to 9.x?
I have a query and I need to show the logs as laid out below (screenshot omitted).

Total Messages: index=app-logs "Request received from all applications" | stats count

Error count: sum of the counts of App logs + Exception logs + Canceled logs + 401 mess logs, where:
App logs: index=app-logs "Application logs received"
Exception logs: index=app-logs "Exception logs received"
Canceled logs: index=app-logs "unpassed logs received"
401 mess logs: index=app-logs "401 error message"

Stand by count: url minus cleared log, where:
url: index=app-logs "url info staged"
cleared log: index=app-logs "Filtered logs arranged"
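A sketch of one way to compute all of these in a single search, assuming each quoted phrase appears verbatim in the raw events:

index=app-logs ("Request received from all applications" OR "Application logs received" OR "Exception logs received" OR "unpassed logs received" OR "401 error message" OR "url info staged" OR "Filtered logs arranged")
| eval bucket=case(match(_raw, "Request received from all applications"), "total", match(_raw, "Application logs received"), "app", match(_raw, "Exception logs received"), "exception", match(_raw, "unpassed logs received"), "canceled", match(_raw, "401 error message"), "err401", match(_raw, "url info staged"), "url", match(_raw, "Filtered logs arranged"), "cleared")
| stats count(eval(bucket="total")) as "Total Messages", count(eval(bucket="app")) as App, count(eval(bucket="exception")) as Exception, count(eval(bucket="canceled")) as Canceled, count(eval(bucket="err401")) as Err401, count(eval(bucket="url")) as Url, count(eval(bucket="cleared")) as Cleared
| eval 'Error count' = App + Exception + Canceled + Err401, 'Stand by count' = Url - Cleared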
I just updated the Splunk App for Lookup File Editing to the latest version, and now I can no longer download lookup files via the CLI. This worked flawlessly in Splunk Cloud when I was running v3.6.0, but I just updated to 4.0.1 (v4.0.2 is not available in Cloud yet) and now I am getting 403 errors.

Through testing, I verified that the lookup endpoint is still valid and the lookup is shared at the global level, and I even changed the account's permissions to sc_admin, but I am still experiencing the same issue. Has anyone else come across this and found a solution? I get the same error no matter which lookup file I attempt to download.

My test command:

python3 lut.py -app search -l geo_attr_countries.csv -app search
INFO:root:list of lookups to download: ['geo_attr_countries.csv']
ERROR:root:[failed] Error: Downloading file: 'geo_attr_countries.csv', status:403, reason:Forbidden, url:https://[REDACTED].splunkcloud.com:8089/services/data/lookup_edit/lookup_contents?lookup_type=csv&namespace=search&lookup_file=geo_attr_countries.csv

Python script from here
Hi - I'm not great at Splunk and am struggling with this one. I have this search result in table form:

Name | Status
Server1 | OK
Server2 | OK
Server1 | Deleted
Server2 | OK
Server3 | Discovered

I'd like to filter out any servers that have status Deleted, so for the example I'd like:

Name | Status
Server2 | OK
Server3 | Discovered

Thanks for any help.
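One way to do this is to collect each server's statuses and drop any server that ever shows Deleted (a sketch, using the Name and Status fields from the table above):

... | stats values(Status) as Status by Name
| where isnull(mvfind(Status, "Deleted"))

mvfind returns the index of the first matching value, or null when nothing matches, so the where clause keeps only servers with no Deleted row.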
Hi all,

I've been working on a new rule and I just can't get it to work fully. I know there are many similar questions/answers on the forum related to this, but none of them work for me.

The events contain the field "TargetUserOrGroupName" containing an email address, e.g.

TargetUserOrGroupName = testmail@gmail.com

I use split and mvindex to get only the email domain out of TargetUserOrGroupName:

| eval email_domain = mvindex(split(TargetUserOrGroupName, "@"), 1)

Then I want to check whether email_domain is in the lookup "free_email_domains.csv". I was able to get this working (partially) with a subsearch and inputlookup:

| search email_domain=* [| inputlookup free_email_domains.csv | fields email_domain]

But there is an issue with getting all the data, as a subsearch returns only 10,000 entries, so free email domains that are not in the first 10k rows are never matched. The local CSV file contains only the column email_domain (I did add an "is_free_domain" column with the value "Yes" in the lookup while testing, but it can be removed if not needed).

Any help is welcome, as I can't get the lookup command to work (maybe due to the additional extraction of the field value).
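Since the CSV is already a lookup file, the lookup command avoids the 10,000-row subsearch limit entirely. A sketch, assuming a lookup definition named free_email_domains exists for the file (the lookup command needs a definition, not just the CSV) and that the is_free_domain=Yes marker column from the testing is kept:

| eval email_domain = lower(mvindex(split(TargetUserOrGroupName, "@"), 1))
| lookup free_email_domains email_domain OUTPUT is_free_domain
| where is_free_domain="Yes"

Lookup matching is exact and case-sensitive by default, so lowercasing both the extracted value and the lookup column helps avoid silent mismatches.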
The latest Splunk Add-on for Windows is version 8.8.0: https://splunkbase.splunk.com/app/742

For customers who are still on version 4.8.2 and need to upgrade to version 8.8.0, the Splunk documentation says to upgrade to version 5.0.1 first, then upgrade to 6, then go from 6 to anything greater. Since the download of the Splunk Add-on for Windows version 6.0 is not available from Splunkbase:

1. Where can the customer get the Splunk Add-on for Windows version 6.0 (https://splunkbase.splunk.com/app/742)?
2. Can the customer upgrade directly from version 5.0.1 to 8.8.0 without breaking anything?

Note: this assumes they have already followed the steps outlined in Upgrade the Splunk Add-on for Windows when upgrading from a version of the add-on earlier than 5.0.0.
Hi There,

I have noticed that since the most recent update (the one that changed the UI), the Manage Bookmarks dashboard displays incorrectly when set as my home page. Has anyone else experienced this or know of a fix?

I have also attached an image that shows what the page is meant to look like (normal behaviour, bottom half of the image) and what it looks like when set as the homepage (strange behaviour, top half of the image).

Any help/info would be appreciated,
Jamie
For my mail logs in JSON format, my Splunk query creates the table below:

mail from | mail sub | mail to
ABC | account created for A | abc@a.com
ABC | account created for B | bcd@a.com
ABC | account created for C | efg@a.com

In my Splunk query I apply dedup on "mail sub". As you can see, unique but very similar subjects remain in the table, and I want them joined or considered as one row.

My ask: what are the possible ways I can partially match table column values so they are combined into one? Ideally the matching logic would use two columns (mail from and mail sub):

mail from | mail sub | mail to | count
ABC | account created for A | abc@a.com | 3

The count of 3 comes from the partial match on the unique subject and mail from combined.
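One possible approach, assuming the variable part of the subject is always the trailing word: strip it with replace() to build a normalized subject, then aggregate on mail from plus that normalized value.

| rename "mail from" as mail_from, "mail sub" as mail_sub, "mail to" as mail_to
| eval sub_norm = replace(mail_sub, "\s+\S+$", "")
| stats values(mail_to) as mail_to, count by mail_from, sub_norm

With the demo rows above, all three subjects normalize to "account created for", giving a single row with count 3.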
Good morning, I need to know the exact search command for this exercise: enter a search that returns all web application events that contain a prohibited status (403).
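If this is against web access logs like the Splunk Search Tutorial data, the search is typically of this shape (the sourcetype is an assumption about your environment):

sourcetype=access_* status=403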
Hello everyone, we use LDAPS in Splunk to allow our employees to log in to the system (Search Heads). Is there a way for users to change their passwords when needed or after they have expired? Some users only access Splunk and do not have any other means to update their passwords.
Hi at all, I have to configure a multisite indexer cluster and I have a doubt: in the Splunk architecting course, the indicated indexer cluster replication port was 9100, but in the multisite indexer cluster documentation the indicated port is 9887. Which is the correct one? Can I use 9100 instead of 9887, or is 9100 dedicated to other purposes?

Thank you for your support.
Ciao. Giuseppe
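For reference, the replication port is whatever free port you declare on each peer node; 9887 is simply the value the documentation uses in its examples. A minimal server.conf stanza on each peer, using 9887 as the example:

[replication_port://9887]

Just make sure the chosen port is not already in use on the peers and is reachable between all peer nodes across sites.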
ID: rb.splunk-es.abc.com:/dev/mapper/vg_data-lv_data_opt:os_high_disk_utilization - rb.splunk-es.abc.com - High Priority - Low disk space on /data/opt at 2.00% free
I came across running a custom Python script in Splunk on triggered events by adding the "run a script" alert action, but I don't know how to do it. Since the alerts are visible in Splunk, I want to run a script that extracts those triggered alerts.
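If the goal is just to pull the triggered alerts back out of Splunk, one option is to query the fired-alerts REST endpoint from a search (a sketch; the exact fields returned may vary, and the account needs permission to read the endpoint):

| rest /services/alerts/fired_alerts
| table title, triggered_alert_count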
Hi All,

I am using the search below to monitor the status of a process based on PID and usage. We tested by stopping the service, and the PID changed. How can we determine when it stopped? With the search below I am not getting the old PID in the table, which was only showing the latest values. How can I modify it?

index=Test1 host="testserver" (source=ps COMMAND=*cybAgent*)
| stats latest(cpu_load_percent) as "CPU %", latest(PercentMemory) as "MEM %", latest(RSZ_KB) as "Resident Memory (KB)", latest(VSZ_KB) as "Virtual Memory (KB)", latest(PID) as PID, latest(host) as host by COMMAND
| eval Process_Status = case(isnotnull('CPU %') AND isnotnull('MEM %'), "Running", isnull('CPU %') AND isnull('MEM %'), "Not Running", 1=1, "Unknown")
| table host, "CPU %", "MEM %", "Resident Memory (KB)", "Virtual Memory (KB)", Process_Status, COMMAND, PID
| fillnull value="N/A"
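One possible way to detect a stopped process (a sketch; it assumes the ps data arrives on a regular interval, and the 300-second threshold is an assumption to tune): keep the time of the last event per process and compare it to the wall clock, so the row survives with its old PID even after the process dies.

index=Test1 host="testserver" (source=ps COMMAND=*cybAgent*)
| stats latest(_time) as last_seen, latest(PID) as PID, latest(cpu_load_percent) as "CPU %", latest(PercentMemory) as "MEM %" by host, COMMAND
| eval Process_Status = if(now() - last_seen > 300, "Not Running", "Running")
| eval Last_Seen = strftime(last_seen, "%F %T")
| table host, COMMAND, PID, "CPU %", "MEM %", Process_Status, Last_Seen

Note that the search time range must reach back far enough to still include the process's last events, otherwise its row disappears from the table entirely.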
When I made a log for HEC with a JSON array, I wasn't sure which shape is better to work with in SPL. Can someone advise me please?

Way 1:

{
  "host": "test",
  "lists": [
    { "id": "list1", "ip": "192.168.0.1", "device": "laptop", "value": 123 },
    { "id": "list2", "ip": "192.168.0.2", "device": "phone", "value": 1223 },
    { "id": "list3", "ip": "192.168.0.3", "device": "desktop", "value": 99 }
  ]
}

Way 2:

{
  "host": "test",
  "list1": { "id": "list1", "ip": "192.168.0.1", "device": "laptop", "value": 123 },
  "list2": { "id": "list2", "ip": "192.168.0.2", "device": "phone", "value": 1223 },
  "list3": { "id": "list3", "ip": "192.168.0.3", "device": "desktop", "value": 99 }
}
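For what it's worth, the array form (way 1) is usually easier to work with in SPL, because the items can be expanded into separate results without knowing their key names in advance. A sketch against way 1:

| spath path=lists{} output=item
| mvexpand item
| spath input=item
| table id, ip, device, value

With way 2, each item sits under its own key (list1.ip, list2.ip, ...), so searches have to enumerate the key names explicitly.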