Hi Team, I'm currently receiving AWS CloudWatch logs in Splunk using the add-on. I'm developing a use case and need to utilize the "event Time" field from the logs. I require assistance in converting the event Time from UTC to SGT.

Sample event Time values (UTC+0):

2023-06-30T17:17:52Z
2023-06-30T21:29:53Z
2023-06-30T22:32:53Z
2023-07-01T00:38:53Z
2023-07-01T04:50:52Z
2023-07-01T05:53:55Z
2023-07-01T06:56:54Z
2023-07-01T07:59:52Z
2023-07-01T09:02:56Z
2023-07-01T10:05:54Z
2023-07-01T11:08:53Z
2023-07-01T12:11:53Z

End result: UTC+0 converted to SGT (UTC+8). Expected output format is "%Y-%m-%d %H:%M:%S".
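A possible approach (a sketch; it assumes the extracted field is called eventTime and uses placeholder index/sourcetype names, so substitute your own): parse the UTC string into epoch time with strptime, add eight hours, and reformat with strftime.

index=aws_cloudwatch sourcetype=aws:cloudwatchlogs
| eval event_time_utc=strptime(eventTime, "%Y-%m-%dT%H:%M:%SZ")
| eval eventTime_SGT=strftime(event_time_utc + 8*3600, "%Y-%m-%d %H:%M:%S")

The +8*3600 shifts the epoch value by eight hours, which matches SGT as long as no DST handling is needed.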
How do I count the total number of rows where a field is non-zero? Thank you in advance. Below is the data set:

ip    Vulnerability    Score
ip1   Vuln1            0
ip1   Vuln2            3
ip1   Vuln3            4
ip2   Vuln4            0
ip2   Vuln5            0
ip2   Vuln6            7

| stats count(Vulnerability) as Total_Vuln, countNonZero(Score) as Total_Non_Zero_Vuln by ip

Is there a function similar to countNonZero(Score) to count the number of rows with a non-zero field in Splunk? With my search above, I would like to have the following output:

ip    Total_Vuln    Total_Non_Zero_Vuln
ip1   3             2
ip2   3             1
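There is no countNonZero function, but stats can count conditionally by putting an eval expression inside the aggregation. A sketch, with field names as in the data set above:

... | stats count(Vulnerability) as Total_Vuln, count(eval(Score>0)) as Total_Non_Zero_Vuln by ip

count(eval(Score>0)) only counts events where the expression evaluates to true, which gives the Total_Non_Zero_Vuln column in the expected output.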
Hi, We need to find all the hosts across all the indexes, but we cannot use index=* anymore, as its use is restricted by a workload rule. Previously the following command was used:

| tstats count where index=* by host
| fields - count

But it uses index=* and now we cannot use it. Will appreciate any ideas.
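One possible workaround, assuming you can enumerate the indexes you are allowed to search (the index names below are placeholders): list them explicitly in the tstats filter instead of using index=*.

| tstats count where index=idx_web OR index=idx_os OR index=idx_fw by host
| fields - count

Or, if running one quick search per index is acceptable, the metadata command returns hosts without scanning events:

| metadata type=hosts index=idx_web
| fields host

Either way the result depends on keeping the explicit index list up to date, which is the trade-off for avoiding index=*.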
I am trying to implement Splunk as a distributed environment, but whenever I make a server the Manager node, the server fails (Splunk does not start). I tried this on both Windows and Ubuntu environments and tried to restart the failed splunkd service on both, but it still fails. I have been trying to find a solution for the last 2 days. Note: I am using a Splunk Enterprise trial license.
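Hard to diagnose without the splunkd.log errors, but for comparison, a minimal manager-node stanza (a sketch only; the shared secret is a placeholder and the factors assume at least two peer indexers) in $SPLUNK_HOME/etc/system/local/server.conf looks roughly like this:

[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = <shared_secret>

If splunkd fails immediately after enabling clustering, the specific reason is usually recorded in $SPLUNK_HOME/var/log/splunk/splunkd.log; mismatched pass4SymmKey values, a port conflict, or license restrictions are common causes, so posting that error message would help narrow it down.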
Hello, I have created a dashboard of 10 panels and I have used base queries. The entire dashboard loads with 4 base queries, but the dashboard always either gets stuck at "Waiting for data" or "Queued waiting for". How can I solve this problem?
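Queued searches on dashboard load usually mean the per-user or instance-wide search concurrency quota is exhausted, so every extra post-process panel you can hang off an existing base search helps. A sketch of the settings typically reviewed (stanza and setting names are from authorize.conf and limits.conf; the values shown are only illustrative and should be raised only after confirming CPU headroom):

# authorize.conf - per-role concurrent search quota
[role_your_role]
srchJobsQuota = 10

# limits.conf - instance-wide historical search concurrency
[search]
base_max_searches = 6
max_searches_per_cpu = 1

Checking the Monitoring Console's search activity views first is usually worthwhile, since it shows whether the queueing comes from the user quota or from the overall instance limit.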
Hello, I have 2 distinct indexes with distinct values. I want to create one final stats query from selected fields of both indexes.

Example:
Index A fields: X, Y, Z
stats count(X), avg(Y) by X Y Z

Index B fields: K, M
stats count(K), max(M) by K M

I am able to search both indexes and produce separate stats. If I run stats on all fields by X Y Z K M, it does not return any results. Note: there are no common fields between the two indexes.
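Since the two result sets share no fields, one option (a sketch with placeholder index names) is to compute each stats separately and append one result set to the other; the rows from each index then appear in a single table, each with its own columns:

index=index_a
| stats count(X) as count_X, avg(Y) as avg_Y by X Y Z
| append
    [ search index=index_b
      | stats count(K) as count_K, max(M) as max_M by K M ]

A "by X Y Z K M" on the combined events returns nothing because no single event carries all five fields, which is why the append-after-stats pattern is the usual way around it.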
Hello, My data is formatted as JSON and it contains a field named "cves" which contains an array of CVE codes related to the event. If I simply alias it to CVE then one row will contain all the CVEs:

[props.conf]
FIELDALIAS-cve = cves as cve

I assume that in order for the data to be useful, I have to somehow break up the array in such a way that each value will appear as a separate row. Is this assumption correct? And if so, what is the way to do that in props.conf? Thank you
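As far as I know, a field alias in props.conf cannot split an array into separate rows; that kind of expansion normally happens at search time. Assuming the automatic JSON extractions are in place, the array elements land in a multivalue field named cves{}, which can be expanded so each CVE becomes its own result row:

... | spath path=cves{} output=cve
| mvexpand cve

If you only need each CVE as a separate value on the same event (rather than a separate row), the spath line alone is enough.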
Hi, May I know why the daily EPS on a specific date is lower than usual? Are there any factors or causes for the lower EPS count? Thank you.
Hello, I tried setting up a Hive connection using Splunk DB Connect and got stuck with the Kerberos authentication. I have added the Cloudera drivers for the Hive DB, but we get the below error:

[Cloudera][HiveJDBCDriver](500168) Error creating login context using ticket cache: Unable to obtain Principal Name for authentication.

Has anyone faced this issue before?
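That error usually means the JVM running DB Connect cannot find a Kerberos principal or ticket for the OS account that runs splunkd. A hedged sketch of the pieces typically involved (the keytab path, principal, host names, and realm are placeholders; the URL properties come from the Cloudera Hive JDBC driver documentation):

kinit -kt /etc/security/keytabs/splunk_svc.keytab splunk_svc@EXAMPLE.COM

jdbc:hive2://hive-host.example.com:10000/default;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hive-host.example.com;KrbServiceName=hive

Verifying with klist that a valid ticket exists for the same user that runs Splunk is usually the first check before touching the JDBC URL.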
I have data like:

{"adult": false, "genre_ids": [16, 10751], "id": 1135710, "original_language": "sv", "original_title": "Vem du, Mamma Mu", "vote_average": 6, "vote_count": 2}

I run this search:

index="tmdb_my_index"
| mvexpand genre_ids{}
| rename genre_ids{} as genre_id
| table genre_id, id

Why does genre_ids{} need the "{}"?
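As I understand it, the {} suffix comes from Splunk's automatic JSON (spath-style) extraction: genre_ids{} means "the elements of the genre_ids array" and is a multivalue field, as opposed to a scalar field that would simply be named genre_ids. Renaming it first also avoids having to quote the braces, for example:

index="tmdb_my_index"
| rename genre_ids{} as genre_id
| mvexpand genre_id
| table genre_id, id

After the mvexpand, each array element (16 and 10751 in the sample event) becomes its own result row alongside the same id.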
Hello everyone, so, many hours went by. It all started with the parameters which were introduced in Splunk 9 (docs reference). Specifically, we should harden the KV store. I've spent several hours in many environments and not a single time was I able to do so. Today, I spent many hours trying to solve it with no success. Here's the problem: I've configured everything and everything is working fine, except the KV store.

[sslConfig]
cliVerifyServerName = true
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = $SPLUNK_HOME/etc/your/path/your_CA.pem

[kvstore]
sslVerifyServerCert = true
sslVerifyServerName = true
serverCert = $SPLUNK_HOME/etc/your/path/your_cert.pem
sslPassword =

[pythonSslClientConfig]
sslVerifyServerCert = true
sslVerifyServerName = true

[search_state]
sslVerifyServerCert = true

(By the way, search_state is neither listed in the docs nor does the value display in the UI; however, an error is logged if it's not set.) You can put the sslPassword parameter in or leave it out, it doesn't matter.

What you'll always end up with in mongod.log when enabling sslVerifyServerCert and sslVerifyServerName is:

2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** WARNING: This server will not perform X.509 hostname validation
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** This may allow your server to make or accept connections to
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** untrusted parties
2023-10-22T00:11:28.557Z I CONTROL [initandlisten]
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** WARNING: No client certificate validation can be performed since no CA file has been provided
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** Please specify an sslCAFile parameter.

Splunk doesn't seem to be passing the required parameters to Mongo as it's expecting them, so let's dig a bit. This is what you'll find at startup:

2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslPEMKeyFile is deprecated. Please use tlsCertificateKeyFile instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslPEMKeyPassword is deprecated. Please use tlsCertificateKeyFilePassword instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
2023-10-21T20:31:54.641Z W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
2023-10-21T20:31:54.644Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2023-10-21T20:31:54.645Z W ASIO [main] No TransportLayer configured during NetworkInterface startup

Has anyone ever tested the TLS verification settings? All of the tlsVerify* settings are just very inconsistent in Splunk 9 and I don't see them mentioned often. I also don't find any bugs or issues listed with KV store encryption. If you list those parameters in the docs, I expect them to work. A "ps -ef | grep mongo" will show you which options are passed from Splunk to Mongo; formatted for readability:
mongod --dbpath=/data/splunk/var/lib/splunk/kvstore/mongo --storageEngine=wiredTiger --wiredTigerCacheSizeGB=3.600000 --port=8191 --timeStampFormat=iso8601-utc --oplogSize=200 --keyFile=/data/splunk/var/lib/splunk/kvstore/mongo/splunk.key --setParameter=enableLocalhostAuthBypass=0 --setParameter=oplogFetcherSteadyStateMaxFetcherRestarts=0 --replSet=8B532733-2DEF-42CC-82E5-38E990F3CD04 --bind_ip=0.0.0.0 --sslMode=requireSSL --sslAllowInvalidHostnames --sslPEMKeyFile=/data/splunk/etc/auth/newCerts/machine/deb-spl_full.pem --sslPEMKeyPassword=xxxxxxxx --tlsDisabledProtocols=noTLS1_0,noTLS1_1 --sslCipherConfig=ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-DSS-AES256-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-DSS-AES128-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256 --nounixsocket --noscripting

I even tried messing around with old server.conf parameters like caCertFile or sslKeysPassword, but it seems like the CA is simply never passed as an argument. Why has no one stumbled upon this?

How did I find all of that? I have developed an app which gives an overview of the Splunk environment's mitigation status against current Splunk Vulnerability Disclosures (SVDs) as well as recommended best practice encryption settings.

If anyone has a working KV store TLS config, I'm eager to see it.

Skalli
Hi all, I have combined lookup data with a field containing various values like aaa, acc, aan, and more. I'm looking to find a single value for 'aan' from the 'source' field, specifically when 'source' has 'ss', 'Ann', or 'css'. Could you please help me construct the correct Splunk query for this?
I am trying to create an alert that triggers if a user successfully logs in without first having been successfully authenticated via MFA. The query is below:   index="okta" sourcetype="OktaIM2:log" outcome.result=SUCCESS description="User login to Okta" OR description="Authentication of user via MFA" | transaction maxspan=1h actor.alternateId, src_ip | where (mvcount(description) == 1) | where (mvindex(description, "User login to Okta") == 0)     I keep getting the error    Error in 'where' command: The arguments to the 'mvindex' function are invalid.     Please help me correct my search and explain what I am doing wrong.
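mvindex expects a numeric position as its second argument (for example, mvindex(description, 0) returns the first value), which is why passing a string throws that error; mvfind is the function that searches a multivalue field for a value and returns its index. A hedged rewrite of the last two lines that keeps your transaction approach:

index="okta" sourcetype="OktaIM2:log" outcome.result=SUCCESS description="User login to Okta" OR description="Authentication of user via MFA"
| transaction maxspan=1h actor.alternateId, src_ip
| where mvcount(description) == 1 AND mvindex(description, 0) == "User login to Okta"

This keeps only transactions that contain a login event but no MFA event, which is the condition you want to alert on.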
Hi All, Currently in Development, Zone 1 has an HF and a single-instance (Search Head + Indexer), a QA HF, and a Deployment Server; Zone 2 also has the same servers. We don't have a Cluster Master, and everything is implemented on Windows systems. As per the requirement, we need to implement high-availability servers in Zone 1 and Zone 2. Please send me the implementation steps for high-availability servers. Regards, Vijay
Cheers, I am hoping to get some help on a Splunk search to generate a badging report. I'll explain further. There are two types of badges students can earn, Sell & Deploy. There are three levels of badges within each badge type. The levels are Novice, Capable and Expert. Issued badges expire after one year. This means students must either renew their existing badge before the expiration date or the student can earn the next level higher badge prior to the expiration date. If a student renews their existing badge, the internal system marks the badge name as Renew_Novice, Renew_Capable, or Renew_Expert depending on which badge they earn. I've supplied some demo data to help illustrate what the data looks like. I need to generate a report that lists the student's name, email address, highest level badge name and expiration date of the highest level badge. There is no need to see lower level badges or expiration dates for lower level badges. Thank you.

Each event is a student name and badge type. I onboarded the data so that the timestamp for each event (_time) is the EarnDate of the badge.

The output of the Splunk report should show the following: Domain, First name, Last name, Email, Badge, ExpireDate

mno.com, lisa edwards, lisa.edwards@mno.com, Sell_Expert, 12/6/23
mno.com, lisa edwards, lisa.edwards@mno.com, Deploy_Capable, 8/1/24
abc.com, allen anderson, allen.anderson@abc.com, Sell_Novice, 10/3/24
def.com, andy braden, andy.braden@def.com, Deploy_Capable, 1/3/24
ghi.com, bill connors, bill.connors@ghi.com, Sell_Novice, 10/17/23
jkl.com, brandy duggan, brandy.duggan@jkl.com, Sell_Expert, 9/5/24

Demo data below.

First name, Last name, Email, Domain, Badge, EarnDate, ExpireDate
lisa, edwards, lisa.edwards@mno.com, mno.com, Sell_Novice, 5/22/22, 5/22/23
lisa, edwards, lisa.edwards@mno.com, mno.com, Deploy_Novice, 5/27/22, 5/27/23
andy, braden, andy.braden@def.com, def.com, Deploy_Novice, 11/10/22, 11/10/23
allen, anderson, allen.anderson@abc.com, abc.com, Sell_Novice, 11/18/22, 11/18/23
andy, braden, andy.braden@def.com, def.com, Deploy_Capable, 1/3/23, 1/3/24
bill, connors, bill.connors@ghi.com, ghi.com, Sell_Novice, 10/17/22, 10/17/23
brandy, duggan, brandy.duggan@jkl.com, jkl.com, Sell_Novice, 7/6/23, 7/6/24
lisa, edwards, lisa.edwards@mno.com, mno.com, Sell_Capable, 7/24/22, 7/24/23
lisa, edwards, lisa.edwards@mno.com, mno.com, Deploy_Capable, 8/20/22, 8/20/23
brandy, duggan, brandy.duggan@jkl.com, jkl.com, Sell_Capable, 8/10/23, 8/10/24
brandy, duggan, brandy.duggan@jkl.com, jkl.com, Sell_Expert, 9/5/22, 9/5/24
allen, anderson, allen.anderson@abc.com, abc.com, Renew_Sell_Novice, 10/3/23, 10/3/24
lisa, edwards, lisa.edwards@mno.com, mno.com, Sell_Expert, 12/6/22, 12/6/23
lisa, edwards, lisa.edwards@mno.com, mno.com, Renew_Deploy_Capable, 8/1/23, 8/1/24
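A possible approach (a sketch only; it assumes the index name is a placeholder, the field names match the demo table, and Email uniquely identifies a student): strip the Renew_ prefix, rank the level, then keep the highest-level, most recently earned badge per student and badge type.

index=badging
| eval badge_norm=replace(Badge, "^Renew_", "")
| eval badge_type=mvindex(split(badge_norm, "_"), 0), level=mvindex(split(badge_norm, "_"), 1)
| eval level_rank=case(level="Novice",1, level="Capable",2, level="Expert",3)
| sort 0 Email badge_type -level_rank -_time
| dedup Email badge_type
| table Domain "First name" "Last name" Email badge_norm ExpireDate
| rename badge_norm as Badge

Because renewals are normalized before the dedup, a Renew_Deploy_Capable event counts as Deploy_Capable but carries its later ExpireDate, which matches the expected output for lisa edwards.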
Hello Splunk Community fam! We are excited to announce the release of all chapters in the Great Resilience Quest. Explore the full path to resilience from “Foundational Visibility” to “Optimized Experiences” and play either the Security Saga or the Observability Chronicle — or both!

A little introduction for the new chapters:

Proactive Response Ocean (Proactive Response): Automate your processes and workflows for faster threat detection, better application performance and reduced downtime.
Castle Optimized Experiences (Optimized Experience): Orchestrate across all your monitoring tools to bring the best experiences to your teams and customers.

So, there is no better time than now to dive into this interactive journey offering bite-sized guidance on key use cases as you achieve greater digital resilience. By accepting this challenge, you will gain a clearer understanding of your current maturity level and find ways to strengthen your position further. Most excitingly, the quest will test your knowledge through fun quizzes and awesome prizes, including a Meta Quest and a Nintendo Switch along the way!

Join the Quest Now

Tips for new players: Don't hesitate! If you tackle the entire path in one go, you stand a great chance of being highlighted on our bi-weekly leaderboard and becoming eligible for Adventurer’s Bounty rewards.

Tips for our current brave questers: Ever since the quest's launch during .conf, we've been blown away by your engagement. Kudos on your progress so far! By completing the remaining two chapters, you'll secure your spot in the running for the grand prize, the Champion’s Tribute, and the Adventurer’s Bounty!

Dive in and good luck! Check out detailed instructions HERE on how to win rewards throughout this adventure. Stay resilient and quest on!
I'm trying to look at the last result of code coverage for each repo and then average that out for the team each month. It would be something like this below, but nesting a latest within an average doesn't work:

| timechart span=1mon avg(latest(codecoverage.totalperc) by reponame) by team

With this, I foresee an issue where the repos built every month aren't static but dynamic. I was looking at streamstats to see how the events change over time, but I can still only get it grouped by reponame or by team and can't get it grouped by both:

| timechart span=1mon latest(codecoverage.totalperc) as now by reponame
| untable _time, reponame, now
| sort reponame
| streamstats current=f window=1 last(now) as prev by reponame
| eval Difference=now-prev
| table _time, reponame, Difference
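One way that avoids nesting aggregations (a sketch, assuming the events carry codecoverage.totalperc, team, and reponame, and that the base search is a placeholder): bin into months first, take the latest value per repo per month, then average those repo values per team.

index=ci sourcetype=codecoverage
| bin _time span=1mon
| stats latest(codecoverage.totalperc) as repo_cc by _time team reponame
| stats avg(repo_cc) as avg_team_cc by _time team
| xyseries _time team avg_team_cc

Repos with no builds in a given month simply drop out of that month's average, which matches the dynamic-repo concern; if you instead want the last known value carried forward, a filldown or streamstats step would be needed before the second stats.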
I'm working on a column chart visualization that shows income ranges:

"$24,999 and under"
"$25,000 - $99,999"
"$100,000 and up"

The problem is that when the column chart orders them, it puts "$100,000 and up" first instead of last. I've created an eval that assigns a sort_order value based on the field value, which orders them correctly. However, I can't figure out how to get the column chart to sort according to that field. This is what I'm currently trying:

| eval sort_order=case(income=="$24,000 and under",1, income=="$25,000 - $39,999",2, income=="$40,000 - $79,999",3, income=="$80,000 - $119,999",4, income=="$120,000 - $199,999",5, income=="$200,000 or more",6)
| sort sort_order
| chart count by income

Is there some other way to accomplish this?
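One option (a sketch, using the three ranges listed at the top as examples): the column chart plots categories in row order, but chart drops sort_order, so rebuild the key after the chart, sort on it, and then remove it so it isn't plotted as a series.

... | chart count by income
| eval sort_order=case(income=="$24,999 and under",1, income=="$25,000 - $99,999",2, income=="$100,000 and up",3)
| sort sort_order
| fields - sort_order

Another common trick is to prefix the labels themselves (for example "1. $24,999 and under") so they sort naturally without a helper field.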
Hello All, I have a lookup file which stores a set of SPLs and it periodically gets refreshed. How to build a search query such that it iteratively executes each SPL from the lookup file? Any suggestions and ideas will be very helpful. Thank you Taruchit
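One hedged option is the map command, which runs a search once per input row and substitutes fields from that row into the search string. Assuming the lookup is called spl_lookup.csv and has a column named spl holding a complete, standalone search per row:

| inputlookup spl_lookup.csv
| map maxsearches=100 search="$spl$"

map runs the searches sequentially and merges their results, so the stored searches should return compatible fields; maxsearches caps how many rows are actually executed, so size it to the lookup.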
I am uploading CSV-format data into Splunk. Every time I make a change to the data or add any info, I upload the full CSV file into Splunk again, so now I have duplicate events in Splunk. Is it possible to show only the data from the last uploaded CSV file? Thanks
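A sketch of one way to keep only the most recently indexed copy of each row, assuming placeholder index/sourcetype names and that some column (called row_key here) uniquely identifies a row in the CSV:

index=your_csv_index sourcetype=your_csv_sourcetype
| eval itime=_indextime
| sort 0 -itime
| dedup row_key
| fields - itime

If the file is really reference data rather than events, uploading it as a lookup and reading it with | inputlookup avoids the duplicate problem entirely, since each upload replaces the previous file.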