All Topics


Hi, Splunk usually takes the event timestamp (_time) and parses it into: date_hour, date_mday, date_minute, date_month, date_second, date_wday, date_year. I have found that events in some of our indexes do not contain these parsed fields, only _time. What could cause this? In addition, I am not sure, but I found a mention of "DATETIME_CONFIG = /etc/datetime.xml" that might be a good starting point; there is not much on the internet that explains clearly how to resolve this. Would appreciate your help here.
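For context on the question above: the date_* fields are extracted at index time while Splunk parses the raw timestamp out of the event, and they are skipped entirely when a sourcetype's DATETIME_CONFIG is set to CURRENT or NONE (both of which assign _time without parsing the raw text). A minimal props.conf sketch, with a hypothetical sourcetype name:

```
# props.conf on the indexer or heavy forwarder ([my_sourcetype] is a placeholder)
[my_sourcetype]
# The default: parse the timestamp out of the raw event, which also
# populates date_hour, date_mday, date_minute, date_month, date_second, ...
DATETIME_CONFIG = /etc/datetime.xml
# By contrast, CURRENT or NONE assigns _time without parsing the event text,
# so the date_* fields are never created for that sourcetype
```

Checking whether the affected sourcetypes override DATETIME_CONFIG somewhere in the configuration layering would be a reasonable first step.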
Hi guys, I just installed the misp42 app in my Splunk instance and added a MISP instance to Splunk, and it works. But I want to compare against my firewall logs: index=firewall srcip=10.x.x.x. I want to compare dstip with ip-dst from MISP to detect unusual access activity, e.g. when dstip=ip-dst: 152.67.251.30. How can I search for this with misp_instance=IP_Block field=value? I tried the following search, but it does not work: index=firewall srcip=10.x.x.x | mispsearch misp_instance=IP_Block field=value | search dstip=ip=dst | table _time dstip ip-dst value action It cannot get ip-dst from the MISP instance. Can anyone help me with this, or suggest a solution? Many thanks and best regards!!
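One snag worth noting, independent of the misp42 app itself: in eval/where expressions SPL treats the hyphen in ip-dst as a minus sign, so the field name has to be wrapped in single quotes, and `search dstip=ip=dst` compares dstip against a literal string rather than another field. I'm not certain of mispsearch's exact argument and output field names, so treat this purely as a sketch (field=dstip assumes the command looks up the value of that event field in MISP):

```
index=firewall srcip=10.x.x.x
| mispsearch misp_instance=IP_Block field=dstip
| where dstip == 'ip-dst'
| table _time dstip ip-dst value action
```

The single quotes around 'ip-dst' tell eval/where to treat it as a field name despite the hyphen; double quotes would make it a string literal.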
Hi Team, We have a requirement to forward archived data to external storage (a GCS bucket). I have checked the Splunk documentation but had no luck with this. Kindly assist me in forwarding the archived data to the GCS bucket.
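Splunk itself doesn't ship a GCS destination for frozen data, but indexes.conf has a coldToFrozenScript hook that runs when a bucket rolls to frozen, and that script can copy the bucket wherever you like. A sketch, assuming the index name and script path are placeholders and that the Google Cloud SDK (gsutil) is installed and authenticated on each indexer:

```
# indexes.conf on the indexers ([my_index] and the path are placeholders)
[my_index]
coldToFrozenScript = "/opt/splunk/bin/scripts/archive_to_gcs.sh"
```

Splunk invokes the script with the frozen bucket's directory as its first argument, so the script body can be as simple as `gsutil cp -r "$1" gs://my-gcs-archive-bucket/` (bucket name hypothetical). Note that Splunk deletes the bucket after the script returns success, so the copy should be verified before exiting 0.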
The Splunk App for AWS security dashboard shows '0' data; I need help fixing this issue. When I try to run/edit the query, it shows the error below.
Hello to all dear friends. Does Splunk have a setting to serve only HTTP version 2.0? Thank you in advance.
I deployed Splunk Universal Forwarder 9.1.1 on Linux servers running on VPC VSIs in IBM Cloud. Some servers are RHEL7, others are RHEL8. These servers send logs to a heavy forwarder. After deployment, memory usage climbed high on each server, and one of the servers went down because of a memory leak. CPU usage is also higher than expected while the Splunk process is running. For example, one server's CPU usage increased by 30% and the process consumed 5.7 GB of memory out of 14 GB after Splunk came up. How can I reduce the resource usage?
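For what it's worth, 5.7 GB is far above what a universal forwarder normally uses (typically hundreds of MB), so it's worth checking for a very large set of monitored files or a blocked output queue before tuning. For CPU, the usual lever is the forwarder's throughput cap in limits.conf; a sketch of throttling it below the UF default:

```
# limits.conf on the universal forwarder (e.g. deployed via a deployment-server app)
[thruput]
# The UF default is 256 KBps; lowering it throttles how fast the forwarder
# reads and ships data, trading ingestion latency for lower CPU usage
maxKBps = 128
```

This only caps throughput-driven CPU; steadily growing memory points at an input or queue problem rather than a tuning one.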
Hi Team, I'm currently receiving AWS CloudWatch logs in Splunk using the add-on. I'm developing a use case and need to use the "event Time" field from the logs. I need assistance converting the event Time from UTC to SGT. Sample event Time values are in UTC+0: 2023-06-30T17:17:52Z 2023-06-30T21:29:53Z 2023-06-30T22:32:53Z 2023-07-01T00:38:53Z 2023-07-01T04:50:52Z 2023-07-01T05:53:55Z 2023-07-01T06:56:54Z 2023-07-01T07:59:52Z 2023-07-01T09:02:56Z 2023-07-01T10:05:54Z 2023-07-01T11:08:53Z 2023-07-01T12:11:53Z End result: UTC+0 converted to SGT (UTC+8). Expected output format is "%Y-%m-%d %H:%M:%S".
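A sketch in SPL, assuming the field is literally named "event Time" (field names containing spaces need single quotes inside eval): parse the ISO-8601 UTC string to epoch time, add eight hours, and format it back.

```
| eval event_time_epoch = strptime('event Time', "%Y-%m-%dT%H:%M:%SZ")
| eval event_time_sgt   = strftime(event_time_epoch + 8*3600, "%Y-%m-%d %H:%M:%S")
```

For example, the first sample above, 2023-06-30T17:17:52Z, would come out as 2023-07-01 01:17:52. Since SGT has a fixed UTC+8 offset with no daylight saving, the simple +8*3600 arithmetic is safe here.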
How do I count the total number of rows where a field is non-zero? Thank you in advance. Below is the data set:

ip   Vulnerability   Score
ip1  Vuln1           0
ip1  Vuln2           3
ip1  Vuln3           4
ip2  Vuln4           0
ip2  Vuln5           0
ip2  Vuln6           7

| stats count(Vulnerability) as Total_Vuln, countNonZero(Score) as Total_Non_Zero_Vuln by ip

Is there a function similar to the hypothetical countNonZero(Score) above to count rows with a non-zero field in Splunk? With my search above, I would like to have the following output:

ip   Total_Vuln   Total_Non_Zero_Vuln
ip1  3            2
ip2  3            1
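There is no countNonZero function, but the standard SPL idiom is a conditional sum inside stats. A sketch against the data above:

```
| stats count(Vulnerability) as Total_Vuln,
        sum(eval(if(Score > 0, 1, 0))) as Total_Non_Zero_Vuln
    by ip
```

The inner eval emits 1 for each non-zero row and 0 otherwise, so the sum is exactly the non-zero row count per ip, giving 2 for ip1 and 1 for ip2 here.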
Hi, We need to find all the hosts across all indexes, but we cannot use index=* anymore, as its use is restricted by a workload rule. Previously the following command was used: | tstats count where index=* by host | fields - count But it uses index=*, and now we cannot use it. Will appreciate any ideas.
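If the workload rule only forbids the literal index=* wildcard, one workaround is to enumerate the indexes explicitly in the tstats clause (the index names below are placeholders for your environment's list); tstats still only touches index-time metadata, so the query stays cheap:

```
| tstats count where index=main OR index=web OR index=security by host
| fields - count
```

Maintaining that list in a macro would keep the search readable as indexes are added.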
I am trying to implement Splunk as a distributed environment, but whenever I make a server the manager node, that server fails (Splunk does not start). I tried this on both Windows and Ubuntu, and tried to restart the failed splunkd service on both, but it keeps failing. I have been trying to find a solution for the last 2 days. Note: I am using a Splunk Enterprise trial license.
Hello, I have created a dashboard of 10 panels, and I have used base searches. The entire dashboard loads from 4 base searches, but the dashboard still always gets stuck in "Waiting for data" or "Queued, waiting for..." How can I solve this problem?
Hello, I have 2 distinct indexes with distinct values. I want to create one final stats query from selected fields of both indexes. Ex: Index A has fields X, Y, Z: stats count(X), avg(Y) by X Y Z. Index B has fields K, M: stats count(K), max(M) by K M. I am able to search both indexes and produce separate stats, but if I run stats on all fields with by X Y Z K M, it does not return any results. Note: there are no common fields between the two indexes.
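The empty result is expected: because the two indexes share no fields, every event is null in at least one of the group-by fields, and stats drops rows whose group-by values are null. One way to get a single combined result set is to append the second aggregation to the first; a sketch with placeholder index names:

```
index=index_a
| stats count(X) as Count_X, avg(Y) as Avg_Y by X Y Z
| append
    [ search index=index_b
      | stats count(K) as Count_K, max(M) as Max_M by K M ]
```

The output is one table where rows from index A have the X/Y/Z columns populated and rows from index B have the K/M columns populated.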
Hello, My data is formatted as JSON and contains a field named "cves" which holds an array of CVE codes related to the event. If I simply alias it to cve, then one row will contain all the CVEs: [props.conf] FIELDALIAS-cve = cves as cve I assume that for the data to be useful, I have to somehow break up the array so that each value appears as a separate row. Is this assumption correct? And if so, what is the way to do that in props.conf? Thank you
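The assumption about needing to break up the array is right, but props.conf can only rename or alias fields; it cannot split one event into several rows. Splitting has to happen at search time. The usual pattern is to extract the array as a multivalue field and then expand it:

```
| spath path=cves{} output=cve
| mvexpand cve
```

After mvexpand, each result row carries exactly one cve value, which is what downstream stats or lookups normally want.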
Hi, May I know why the daily EPS on a specific date is lower than usual? What factors could cause the lower EPS count? Thank you.
Hello, I tried setting up a Hive connection using Splunk DB Connect and got stuck on Kerberos authentication. I have added the Cloudera drivers for the Hive DB, but we get the error below: [Cloudera][HiveJDBCDriver](500168) Error creating login context using ticket cache: Unable to obtain Principal Name for authentication. Has anyone faced this issue before?
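That (500168) error usually means the JVM running DB Connect cannot find a valid Kerberos ticket, so confirming a kinit/keytab (and a readable krb5.conf) under the account that runs splunkd is a good first check. For reference, a Kerberos-style Cloudera Hive JDBC URL looks roughly like the following; the host, realm, and service name are placeholders for your environment:

```
jdbc:hive2://hive-host.example.com:10000/default;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hive-host.example.com;KrbServiceName=hive
```

In the Cloudera driver, AuthMech=1 selects Kerberos; without it the driver won't attempt the ticket-cache login at all.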
I have data like: {"adult": false, "genre_ids": [16, 10751], "id": 1135710, "original_language": "sv", "original_title": "Vem du, Mamma Mu", "vote_average": 6, "vote_count": 2}
I have a data like: {"adult": false,  "genre_ids": [16, 10751], "id": 1135710, "original_language": "sv", "original_title": "Vem du, Mamma Mu", "vote_average": 6, "vote_count": 2}     I do search:       index="tmdb_my_index" |mvexpand genre_ids{} |rename genre_ids{} as genre_id |table genre_id, id               Why genre_ids{} need the "{}"        
Hello everyone, so, many hours went by. It all started with the parameters which were introduced in Splunk 9 (docs reference). Specifically, we should harden the KV store. I've spent several hours in many environments, and not a single time was I able to do so. Today, I spent many hours trying to solve it with no success. Here's the problem: I've configured everything and everything is working fine, except the KV store.

[sslConfig]
cliVerifyServerName = true
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = $SPLUNK_HOME/etc/your/path/your_CA.pem

[kvstore]
sslVerifyServerCert = true
sslVerifyServerName = true
serverCert = $SPLUNK_HOME/etc/your/path/your_cert.pem
sslPassword =

[pythonSslClientConfig]
sslVerifyServerCert = true
sslVerifyServerName = true

[search_state]
sslVerifyServerCert = true

(btw, search_state is neither listed in the docs nor does the value display in the UI; however, an error is logged if it's not set). You can put the sslPassword parameter in or leave it out; it doesn't matter. What you'll always end up with in mongod.log when enabling sslVerifyServerCert and sslVerifyServerName is:

2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** WARNING: This server will not perform X.509 hostname validation
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** This may allow your server to make or accept connections to
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** untrusted parties
2023-10-22T00:11:28.557Z I CONTROL [initandlisten]
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** WARNING: No client certificate validation can be performed since no CA file has been provided
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** Please specify an sslCAFile parameter.

Splunk doesn't seem to be passing the required parameters to mongod as it expects them, so let's dig a bit. This is what you'll find at startup:

2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslPEMKeyFile is deprecated. Please use tlsCertificateKeyFile instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslPEMKeyPassword is deprecated. Please use tlsCertificateKeyFilePassword instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
2023-10-21T20:31:54.641Z W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
2023-10-21T20:31:54.644Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2023-10-21T20:31:54.645Z W ASIO [main] No TransportLayer configured during NetworkInterface startup

Has anyone ever tested the TLS verification settings? All of the tlsVerify* settings are just very inconsistent in Splunk 9, and I don't see them mentioned often. I also can't find any bugs or issues listed for KV store encryption. If you list those parameters in the docs, I expect them to work. A "ps -ef | grep mongo" will show you which options Splunk passes to mongod; formatted for readability:
mongod --dbpath=/data/splunk/var/lib/splunk/kvstore/mongo --storageEngine=wiredTiger --wiredTigerCacheSizeGB=3.600000 --port=8191 --timeStampFormat=iso8601-utc --oplogSize=200 --keyFile=/data/splunk/var/lib/splunk/kvstore/mongo/splunk.key --setParameter=enableLocalhostAuthBypass=0 --setParameter=oplogFetcherSteadyStateMaxFetcherRestarts=0 --replSet=8B532733-2DEF-42CC-82E5-38E990F3CD04 --bind_ip=0.0.0.0 --sslMode=requireSSL --sslAllowInvalidHostnames --sslPEMKeyFile=/data/splunk/etc/auth/newCerts/machine/deb-spl_full.pem --sslPEMKeyPassword=xxxxxxxx --tlsDisabledProtocols=noTLS1_0,noTLS1_1 --sslCipherConfig=ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-DSS-AES256-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-DSS-AES128-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256 --nounixsocket --noscripting

I even tried messing around with old server.conf parameters like caCertFile or sslKeysPassword, but it seems the CA is simply never passed as an argument. Why has no one stumbled upon this? How did I find all of this? I have developed an app which gives an overview of a Splunk environment's mitigation status against current Splunk Vulnerability Disclosures (SVDs), as well as recommended best-practice encryption settings. If anyone has a working KV store TLS config, I'm eager to see it. Skalli
Hi all, I have combined lookup data with a field containing various values like aaa, acc, aan, and more. I'm looking to find a single value for 'aan' from the 'source' field, specifically when 'source' has ss, Ann, or css. Could you please help me construct the correct Splunk query for this?
I am trying to create an alert that triggers if a user successfully logs in without first having been successfully authenticated via MFA. The query is below:   index="okta" sourcetype="OktaIM2:log" outcome.result=SUCCESS description="User login to Okta" OR description="Authentication of user via MFA" | transaction maxspan=1h actor.alternateId, src_ip | where (mvcount(description) == 1) | where (mvindex(description, "User login to Okta") == 0)     I keep getting the error    Error in 'where' command: The arguments to the 'mvindex' function are invalid.     Please help me correct my search and explain what I am doing wrong.
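The error comes from the second where clause: mvindex expects a numeric index as its second argument (e.g. mvindex(description, 0) returns the first value), not a string to search for. Assuming the intent is "exactly one description value in the transaction, and it is the plain login", a sketch of the corrected final lines:

```
| where mvcount(description) == 1 AND mvindex(description, 0) == "User login to Okta"
```

An equivalent formulation is mvfind(description, "Authentication of user via MFA"), which returns null when no MFA event made it into the transaction, so isnull(mvfind(...)) flags the same condition.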
Hi All, Currently, in Development Zone 1 we have an HF and a single-instance (Search Head + Indexer); in QA we have an HF and a deployment server, and Zone 2 has the same servers. We don't have a cluster master, and everything is implemented on Windows systems. As per the requirement, we need to implement high-availability servers in Zone 1 and Zone 2. Please send me the implementation steps for high-availability servers. Regards, Vijay