All Topics


Hi, my objective is to read the Cluster Shared Volume (CSV) information from a Hyper-V cluster. I initially achieved it by executing a PS1 with the powershell handler [powershell], but the user running the UF must have access rights in the cluster, so I wanted to do it differently. After some analysis I decided to move the execution of the PS1 to a scheduled task and simply configure Splunk to read the resulting log file of each execution.

The following stanza is the original one running the PS1 directly, and it works OK:

#CSV PerfMon Data
[powershell://CSVPerfMetrics]
script = . "$SplunkHome\etc\apps\Splunk_TA_microsoft-hyperv\bin\GetCSVDiskInformation.ps1"
interval = 300
source = microsoft:hyperv:powershell:csvperfmetrics.ps1
sourcetype = microsoft:hyperv:perf:csv
index = hyper-v
disabled = 0

Then I created the scheduled task, which creates a different log file on each execution, and I tried the stanza below without success:

[monitor://C:\Scritps\CSV\Outputs\*.log]
sourcetype = microsoft:hyperv:perf:csv
queue = indexQueue
index = hyper-v
disabled = 0

The PowerShell script:

$csvdata = Get-ClusterSharedVolume -Cluster XXXXXXXXXX
$csvs = @()
foreach ( $csvitem in ($csvdata)) {
    $csv = [PSCustomObject]@{
        VolumeName = $csvitem.Name;
        ID = $csvitem.Id;
        TotalSpaceKB = ($csvitem.SharedVolumeInfo.Partition.Size);
        FreeSpaceKB = ($csvitem.SharedVolumeInfo.Partition.FreeSpace);
        PercentFree = $csvitem.SharedVolumeInfo.Partition.PercentFree;
    }
    $csvs += $csv
}
$timestamp = Get-Date -Format FileDateTime
$csvs | Add-Content -Path "C:\Scritps\CSV\Outputs\CSVRead_$timestamp.log"

The content of the log files looks like the following:

@{VolumeName=Cluster Vol1; ID=4e902f6b-3380-499b-af29-8ff35c02e80d; TotalSpaceKB=2198752718848; FreeSpaceKB=1135195062272; PercentFree=51.62905}
@{VolumeName=Cluster Vol2; ID=002b604e-1671-4054-bed2-c1f8a068b40d; TotalSpaceKB=2198752718848; FreeSpaceKB=789340848128; PercentFree=35.89948}
@{VolumeName=Cluster Vol3; ID=14ce695f-f586-4d65-9e3b-6ab85524fd91; TotalSpaceKB=2198752718848; FreeSpaceKB=1692997120000; PercentFree=76.99807}
@{VolumeName=Cluster Vol4 LinuxRKE; ID=c49e9b9f-d5c1-409c-8ef0-cf8613f81571; TotalSpaceKB=805283295232; FreeSpaceKB=536175865856; PercentFree=66.58227}
@{VolumeName=Cluster Vol5 LinuxSecLAB; ID=f0f7e5e7-1290-4cd4-af66-3c75ffffc1ad; TotalSpaceKB=805283295232; FreeSpaceKB=774000726016; PercentFree=96.11533}

Any suggestion of what could be happening? Any log I could check? So far I haven't found any error message in splunkd.log or any other log file, but the indexing simply does not work. Many thanks.
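One possibility worth ruling out here (an assumption, not a confirmed diagnosis): Splunk identifies monitored files by a CRC of their first bytes, and since every CSVRead_*.log starts with nearly identical content (the same volume names and IDs), new files can be silently skipped as already-seen. A minimal inputs.conf sketch that salts the CRC with the file path:

[monitor://C:\Scritps\CSV\Outputs\*.log]
sourcetype = microsoft:hyperv:perf:csv
index = hyper-v
disabled = 0
# Assumption: the files begin with near-identical bytes, so include the
# full path in the CRC so each new file is treated as a new file.
crcSalt = <SOURCE>

Running "splunk list inputstatus" on the forwarder also shows, per file, whether the tailing processor opened it and how far it read, which should confirm or rule out this theory.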
Hi. Wondering if someone has found a similar solution? I don't have admin rights, so I can't load js/css files onto the Splunk server; it all has to be done within local XML code. Here's a simple breakdown of what I'd like my dashboard table to do:

query | table A, B, C, D, E

When column A has the string value "problem", I want to highlight the background of column B in red; otherwise column B keeps the default background color. And I need to hide column A from the end user. I found many posts (including the two below), but after a couple of days, no luck in finding the correct logic:
https://community.splunk.com/t5/Dashboards-Visualizations/How-do-you-highlight-a-table-cell-based-on-a-field-of-the-search/m-p/455645
https://docs.splunk.com/Documentation/Splunk/latest/Viz/TableFormatsXML#Color_palette_types_and_options
Thanks in advance!
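A hedged sketch of one pure-XML approach, with the caveat that expression color palettes only see the cell's own value: fold the flag from A into B's displayed value with eval, match on it, and hide A with the table's fields option. The marker text and hex colors below are illustrative assumptions:

<table>
  <search>
    <query>query | table A, B, C, D, E
| eval B=if(A="problem", B." [problem]", B)</query>
  </search>
  <!-- show only B-E; column A stays in the results but is hidden -->
  <fields>["B","C","D","E"]</fields>
  <format type="color" field="B">
    <colorPalette type="expression">if (match(value,"problem"), "#D93F3C", "#FFFFFF")</colorPalette>
  </format>
</table>

The trade-off of this sketch is that B's displayed value carries the "[problem]" marker; if that is unacceptable, a map-type palette on an exact value is the usual alternative.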
Hello, I have implemented an indexer cluster with 16 nodes and attached the license after the cluster had been up and running for approximately 20 days. After applying the 1 TB/day license, 15 nodes connected successfully to the license manager, but one is having issues and generating the following alert:

[idx001] Streamed search execute failed because: Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times

I have tried some restarts and reconnecting this node to the license manager, but the message persists. In the license manager this is the only indexer missing, even though IP/DNS connectivity works. Is there any way to force a reconnection from this indexer?
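A hedged sketch of how a peer is normally re-pointed at the license manager; the command name depends on your Splunk version, and the URI below is a placeholder:

# Newer versions (run on the problem indexer, then restart it):
splunk edit licenser-localpeer -manager_uri https://<license-manager>:8089
# Older versions use the legacy command name:
splunk edit licenser-localslave -master_uri https://<license-manager>:8089

# Or verify the setting directly in $SPLUNK_HOME/etc/system/local/server.conf:
# [license]
# manager_uri = https://<license-manager>:8089   (master_uri on older versions)

Comparing this stanza between a working indexer and the failing one should show whether the failing node is still pointing at itself (self-mode) or at a stale address.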
Hi Splunkers,

While splunking on bucket statistics, I've noticed that despite the default of 90 days for maxHotSpanSecs, I find many buckets (which are not quarantine buckets) with a timespan (difference between latest and earliest) of around 600 or even 800 days. On the other hand, I can also easily find buckets with a timespan of 2 days. Does anyone know why this could happen? Is it possible to clearly determine the rules by which Splunk assigns a timespan to hot buckets?

For checking my statistics I use the search below:

index=_internal component=IndexWriter bucket="hot_*" idx=*
| eval bucket_timespan=latest-earliest
| eval bucket_timespan=(bucket_timespan/60/60/24)
| where bucket_timespan > 90 OR bucket_timespan < 3
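As a cross-check (a sketch, not an explanation of the behavior), the same statistics can be pulled straight from the bucket metadata with dbinspect, which avoids depending on _internal retention:

| dbinspect index=*
| search state=hot
| eval spanDays=round((endEpoch-startEpoch)/86400,1)
| where spanDays>90 OR spanDays<3
| table index, path, state, startEpoch, endEpoch, spanDays

If both searches agree, the long-span buckets are real and not an artifact of the _internal logging.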
Hello, we've installed a Linux UF on a supported distribution. We've done this on multiple machines but are facing an issue on a single server. The forwarder fails to start with the following error:

./splunk start
Starting Splunk...
Splunk> All batbelt. No tights.
Checking prerequisites...
Checking mgmt port [8089]: open
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-8.2.0-e053ef3c985f-linux-2.6-x86_64-manifest'
All installed files intact
Done
All preliminary checks passed.
Starting Splunk server daemon (splunkd)...
ERROR: pid 90702 terminated with signal 4
Done [ OK ]

We tried installing it as a daemon as well; it doesn't report any error, but the service fails to start. When starting directly from the bin folder, we get the above error. Any helpful pointers, please? Thanks.

Regards,
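Signal 4 is SIGILL (illegal instruction), which on an install that works everywhere else often points at this one box's CPU lacking an instruction-set extension the binary uses. A hedged check, comparing a working host against the failing one:

# Compare CPU model and instruction-set flags between a working and a failing host
grep -m1 'model name' /proc/cpuinfo
grep -o -w 'sse4_1\|sse4_2\|avx' /proc/cpuinfo | sort -u

# The kernel usually logs the faulting instruction as a trap as well
dmesg | grep -i 'trap\|splunkd'

If the failing host is a VM, the hypervisor's CPU masking/compatibility mode is another plausible culprit to compare.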
I would like to get the event count for a particular time period on each day of a given date range (which I will select from the search time picker). The time period is between 14:31 and 15:01 each day. I am using the query below. Two questions: a) can this query be optimized for better performance, and b) the query gives me the statistics, but the graph in the visualization displays the entire time range and does not adhere to my chosen window of 14:31 to 15:01.

index=applogs_01 AND sourcetype=app_pmt
| eval Date=strftime(_time, "%m/%d/%Y")
| where (_time >= strptime(Date." "."14:59","%m/%d/%Y %H:%M") AND _time<=strptime(Date." "."15:01","%m/%d/%Y %H:%M"))
| bin span=1s _time
| stats count by _time
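One hedged way to avoid the per-event strftime/strptime round trip is to compare the hour-minute number directly. This sketch uses the 14:31-15:01 window from the question; adjust the bounds if the 14:59 in the original query was intentional:

index=applogs_01 sourcetype=app_pmt
| eval hhmm = tonumber(strftime(_time,"%H%M"))
| where hhmm>=1431 AND hhmm<=1501
| bin span=1s _time
| stats count by _time

Note this filters events, which answers (a); for (b), a time-based chart will still plot the full selected range on the x-axis because the out-of-window gaps remain part of the axis, so plotting count by a derived day/time field instead of _time is the usual workaround.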
Hello, I have a problem. A course in my education.splunk account has expired. Is there any way to reset it? Thank you.
The query below shows the expected statistics table in Splunk 8.2, but shows only events in Splunk 6.2.

YOUR_SEARCH
| fields A_real.S*.A*
| rename A_real.* as *
| eval dummy=null()
| foreach S* [ eval dummy= if(isnull(dummy),"<<FIELD>>".":".'<<FIELD>>',dummy."|"."<<FIELD>>".":".'<<FIELD>>') ]
| eval dummy=split(dummy,"|")
| stats count by dummy
| fields - count
| eval f1= mvindex(split(dummy,"."),0), I1= mvindex(split(dummy,"."),1), Id=mvindex(split(I1,":"),0), {f1}=mvindex(split(I1,":"),1)
| fields - dummy I1 f1
| stats values(*) as * by Id
| lookup YOUR_LOOKUP Id
| where isnotnull(Timestamp)
| fields - Timestamp

Please check.
Hello Splunkers,

Usernames in my environment are shown as:
user=Company\username@AD#
where # is a number, and some users are shown as:
user=Company\username$@AD#

The username has many variations:
only numbers
only letters
a combination of both

I want to extract only the username, without the surrounding characters.

Thanks ^_^
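A hedged rex sketch, assuming the whole string is already in a field named user (the field and capture-group names are illustrative): capture everything between the backslash and the optional trailing $ before the @.

... | rex field=user "\\\\(?<username>[^@$]+)[$]?@"
| table user, username

The character class [^@$]+ stops the capture before either @ or a trailing $, so both the Company\username@AD# and Company\username$@AD# variants yield just the username.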
Hi everyone,

My business user has a requirement: he needs to run a Python script from the Splunk web portal. The script is located on a Windows server (heavy forwarder). Due to InfoSec policy, users are not allowed to RDP into the Windows server, so we need to find a way to call the Python script from the front end.

Script background: we get logs from InfoSec containing URL dumps from the application gateway, SMTP gateway, and many other sources. The Python script reads the URLs from the dump and creates individual log files for each application (SMTP gateway, proxy server, web application gateway, load balancer, etc.). The individual log files are then indexed through a Splunk forwarder. The script needs to run every five hours like a scheduled job.

Challenge: the terminal server the user logs in to already reports to a different Splunk instance handled by the InfoSec team, so I cannot configure the Splunk universal forwarder on the terminal server to do the required job.

It would be helpful if you could share some ideas on how to achieve this. Let me know if you need any additional details.

Thank you,
Ashwath Kumar R
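If the five-hourly schedule is the real requirement (rather than on-demand runs from the UI), one hedged option is a scripted input on the heavy forwarder itself, which runs the script under Splunk's scheduler with no RDP needed. The app name, script path, sourcetype, and index below are assumptions:

# $SPLUNK_HOME\etc\apps\my_url_app\local\inputs.conf  (app and paths are hypothetical)
[script://$SPLUNK_HOME\etc\apps\my_url_app\bin\process_urls.py]
# run every 5 hours; interval also accepts a cron expression such as 0 */5 * * *
interval = 18000
sourcetype = url_dump
index = main
disabled = 0

The heavy forwarder ships with its own Python runtime for scripted inputs, which sidesteps the question of what is installed on the terminal server.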
Hello, please guide me: how can I use the SOC Prime tool with Splunk? I am new to SOC Prime. Regards, Rahul
Hi Splunkers, I am facing the timestamp issue below. Could you please help me with it?

08-01-2021 22:49:25.563 -0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (800) characters of event. Defaulting to timestamp of previous event (Sun Aug 1 22:49:25 2021).

[props]
TIME_FORMAT = %m-%d-%Y%H:%M:%S.%3Q
SHOULD_LINEMERGE = False
is_valid = True
maxDist = 9999
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 800
NO_BINARY_CHECK = true
category = Custom
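Two things stand out in that stanza, assuming the events begin with a timestamp like "08-01-2021 22:49:25.563 -0400": DATETIME_CONFIG = CURRENT tells Splunk to ignore TIME_FORMAT entirely and stamp events with the current time, and TIME_FORMAT is missing the space between the date and the time. It is also worth confirming the stanza header names the actual sourcetype, since a literal [props] header would never match. A hedged sketch with the stanza name as a placeholder:

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
# the sample "08-01-2021 22:49:25.563 -0400" has a space and a timezone offset
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30
# leave DATETIME_CONFIG unset so TIME_FORMAT is actually used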
I need to display a Splunk dashboard panel as a pop-up window when clicking as a drilldown.

For example:

index=_internal | stats count by source, sourcetype

When I click on the table, it should pop up a panel containing the search below:

index=_internal | table sourcetype
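Simple XML has no native pop-up window, but a common hedged approximation is a hidden panel revealed by a drilldown token; the token name is illustrative:

<panel>
  <table>
    <search><query>index=_internal | stats count by source, sourcetype</query></search>
    <drilldown>
      <!-- reveal the detail panel when a row is clicked -->
      <set token="show_detail">true</set>
    </drilldown>
  </table>
</panel>
<panel depends="$show_detail$">
  <table>
    <search><query>index=_internal | table sourcetype</query></search>
  </table>
</panel>

The second panel stays hidden until the token is set; an <unset> in another drilldown, or a reset link, can hide it again.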
I had the Splunk Cloud Gateway installed before it was standard (Splunk 7.x) and working, with alerts and dashboards accessible from my phone. I believe a license update that stripped my account (the new terms allow only one account, so admin) broke it (I stopped getting alerts). Since it's a home lab and not prod, I didn't dig into it.

Now that I am digging into it, the gateway dashboard is showing errors (screenshot not included). This SPL:

index=_internal source=*cloud* ERROR AND NOT SUBSCRIPTION

also shows errors (screenshot not included). I can register my device, but it can't see any dashboards; it seems to time out. There seems to be a vacuum on Google for troubleshooting this, except for talk of using proxies, and I am not running a proxy. What could the issue be?
I have uninstalled the add-on Splunk_TA_jmx (by removing the application directory and restarting Splunk), but I am still getting data with sourcetype=jmx into the main index every 60 seconds. Where is this data coming from?

I would like to reinstall the add-on after I get everything back to the pre-install state.
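A hedged way to locate a leftover input definition is btool, which prints every effective inputs.conf stanza along with the file it came from:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i jmx
# also worth checking config layers a directory removal can miss:
grep -ri jmx $SPLUNK_HOME/etc/system/local $SPLUNK_HOME/etc/apps/*/local 2>/dev/null

If nothing turns up locally, the data may be arriving from a forwarder that still has the add-on installed, which the host/source fields on the incoming jmx events should reveal.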
I need to mask the data in the values of the fields <ab:Nm>, <ab:StrtNm>, <ab:PstCd>, <ab:TwnNm>, <ab:CtrySubDvsn>, <ab:Ctry>, and <ab:Ustrd> in the events below.

First event:
<ab:Dbtr>
<ab:Nm>ACB L OESGT AGBDH</ab:Nm>
<ab:PstlAdr>
<ab:StrtNm>12345 BELBON WAY</ab:StrtNm>
<ab:PstCd>45352-1242</ab:PstCd>
<ab:TwnNm>CUBA TWP</ab:TwnNm>
<ab:CtrySubDvsn>AB</ab:CtrySubDvsn>
<ab:Ctry>CD</ab:Ctry>
</ab:PstlAdr>
</ab:Dbtr>
<ab:RmtInf>
<ab:Ustrd>happy birthday </ab:Ustrd>
</ab:RmtInf>

Second event:
<ab:Dbtr>
<ab:Nm>AMIRA S ELHASSAN</ab:Nm>
<ab:PstlAdr>
<ab:StrtNm>5267 APHEG DR</ab:StrtNm>
<ab:PstCd>54672-1080</ab:PstCd>
<ab:TwnNm>ANTARTICA BEACH</ab:TwnNm>
<ab:CtrySubDvsn>EF</ab:CtrySubDvsn>
<ab:Ctry>CD</ab:Ctry>
</ab:PstlAdr>
</ab:Dbtr>
<ab:RmtInf>
<ab:Ustrd>happy birthday, birthday party on me</ab:Ustrd>
</ab:RmtInf>

Third event:
<ab:Dbtr>
<ab:Nm>ALYSSA FOSTER</ab:Nm>
<ab:PstlAdr>
<ab:StrtNm>529833 PRIME BILL CT</ab:StrtNm>
<ab:PstCd>45673-8297</ab:PstCd>
<ab:TwnNm>BEDERICK</ab:TwnNm>
<ab:CtrySubDvsn>MD</ab:CtrySubDvsn>
<ab:Ctry>CD</ab:Ctry>
</ab:PstlAdr>
</ab:Dbtr>
<ab:RmtInf>
<ab:Ustrd>happy birthday my love, have fun.</ab:Ustrd>
</ab:RmtInf>

Can someone help me with the regex?
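A hedged index-time approach using SEDCMD in props.conf on the instance that parses these events (the stanza name is a placeholder); this masks the element text while keeping the tags intact:

[your_sourcetype]
# \1 backreferences the tag name so one rule covers all seven elements
SEDCMD-mask_pii = s/<ab:(Nm|StrtNm|PstCd|TwnNm|CtrySubDvsn|Ctry|Ustrd)>[^<]*<\/ab:\1>/<ab:\1>XXXXX<\/ab:\1>/g

Note SEDCMD masks the data before indexing, so the original values are not recoverable from Splunk afterwards; if the raw values must be preserved, a search-time rex/replace on a restricted role is the alternative pattern.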
Hi all. We received a bulletin that our UF certificates were expiring. I downloaded the credentials package and installed it per the documentation on my deployment server using <...install app...-update 1...>. That appears to have updated the apps 100_xxxx_splunkcloud folder/files, but my questions are:
1. Doesn't the deployment-apps 100_xxxx_splunkcloud folder/files need to be updated to get the updated credentials to all of my UFs?
2. How can I validate the new certificate date on my Windows UFs and on my HFs?
Thanks in advance.
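For question 2, a hedged check with openssl from any machine that has it installed; the pem filename below is an assumption, so point it at whichever client certificate actually sits inside the credentials app on the forwarder:

# prints the notAfter (expiry) date of the certificate
openssl x509 -noout -enddate -in "C:\Program Files\SplunkUniversalForwarder\etc\apps\100_xxxx_splunkcloud\default\<client_cert>.pem"

Running this against the copy in deployment-apps on the deployment server and the deployed copy on a UF should also answer question 1: if the deployment-apps copy still shows the old date, the updated app never reached the forwarders.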
Hello Splunkers,

I made a Splunk search to count the number of blocked URLs as a single value with a one-day span over a three-day search period.

Here's my search:

index=proxy action=blocked
| bin _time span=1d
| stats count(http_url) by _time

(results screenshot not included)

I want to show the count in thousands. I tried the search below, but no results appear:

index=proxy action=blocked
| bin _time span=1d
| eval url_count= http_url/1000
| stats count(url_count) by _time

Please help me with it. Thanks ^_^
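A hedged sketch of the likely fix: http_url is a string, so http_url/1000 evaluates to null, and count(url_count) then counts nothing. Count first, then divide the count:

index=proxy action=blocked
| bin _time span=1d
| stats count AS url_count by _time
| eval url_count_k = round(url_count/1000, 1)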
Hi, I have written a script that runs every hour; the 24-hour window here runs from 07:00 am to 06:00 am the next day. My requirement is to provide a monthly count for the report, but it should only consider the last log file of each day (06:00 am), which contains all the updated information for the day.

index=XX host=XX source="abc_YYYYMMDD.csv"
| dedup _raw
| fields source A B C
| replace abc*.csv WITH * in source
| eval source=strftime(strptime(source,"%Y%m%d"),"%d-%B-%Y")
| eval jobs=A+B+C
| dedup jobs
| stats count(jobs) by source

This displays all the events, but I want only the count from the last log file.
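One hedged approach, assuming each hourly run reloads the same abc_YYYYMMDD.csv so the 06:00 am run is simply the last load of that source: keep only the events from the most recent load per source using _indextime, then count as before.

index=XX host=XX source="abc_*.csv"
| eventstats max(_indextime) AS last_load BY source
| where _indextime = last_load
| dedup _raw
| fields source A B C
| replace abc*.csv WITH * in source
| eval source=strftime(strptime(source,"%Y%m%d"),"%d-%B-%Y")
| eval jobs=A+B+C
| dedup jobs
| stats count(jobs) by source

If instead each run writes a distinctly named file, the same idea applies but the BY clause would need the day extracted from the filename rather than the full source.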
Hello all, I'm trying to create an alert for successful brute-force attempts using the Authentication data model. Currently, I'm doing this:

| tstats summariesonly=true count as success FROM datamodel=Authentication where Authentication.user!="*$*" AND Authentication.action="success" BY _time span=60m sourcetype Authentication.src Authentication.user
| join max=0 _time, sourcetype, Authentication.src, Authentication.user
    [| tstats summariesonly=true count as failure FROM datamodel=Authentication where Authentication.user!="*$*" AND Authentication.action="failure" BY _time span=60m sourcetype Authentication.src Authentication.user]
| search success>0 AND failure>0
| where success<(failure*.05)
| rename Authentication.* as *
| lookup dnslookup clientip as src OUTPUT clienthost as src_host
| table _time, user, src, src_host, sourcetype, failure, success

It aggregates the successful and failed logins for each user and src, by sourcetype, by hour. It then returns the cases where a user failed to authenticate to a specific sourcetype from a specific src at least 95% of the time within the hour, but not 100% (the user tried to log in a bunch of times, most of the attempts failed, but at least one succeeded). What everyone else would like me to do is look for the traditional successful-brute-force pattern: many failed logins followed by a successful one. I can't seem to find an efficient way of doing that with the Splunk Authentication data model. We get over a million authentication attempts per day, and just under 100,000 of those are failures. We can't afford Splunk Enterprise Security (higher ed), and my understanding is that this is more or less how the ES successful-brute-force detection works anyway. Do any of you have advice on finding a pattern of a large number of something (failures, whatever) followed by something else (a success, etc.)?
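A hedged sketch of a join-free version of the same logic: pull both actions in one tstats pass by adding action to the BY clause, then pivot into columns. The IN clause assumes a reasonably recent Splunk version:

| tstats summariesonly=true count FROM datamodel=Authentication
    where Authentication.user!="*$*" Authentication.action IN ("success","failure")
    BY _time span=60m sourcetype Authentication.src Authentication.user Authentication.action
| eval success=if('Authentication.action'=="success", count, 0),
       failure=if('Authentication.action'=="failure", count, 0)
| stats sum(success) as success, sum(failure) as failure
    BY _time, sourcetype, Authentication.src, Authentication.user
| where success>0 AND failure>0 AND success<(failure*0.05)

For the strict ordering (failures then a success), one option is to use this cheap aggregate pass only to surface candidate user/src pairs, then run a second, narrow search over the raw events for just those pairs with streamstats counting consecutive failures before each success; that keeps the expensive ordered scan off the full million-events-per-day stream.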