All Topics

Hi all,

We have approximately 100 Splunk Universal Forwarders (UFs) installed at a remote site, and we're interested in setting up a Heavy Forwarder (HF) at that location to forward the data from the UFs to the indexers. Additionally, we plan to deploy the deployment server on the same virtual machine (VM).

Based on the documentation, it appears that a deployment server can be co-located with another Splunk Enterprise instance as long as the deployment client count remains at or below 50. We would like to better understand the rationale behind this 50-client limit, and why a co-located deployment server cannot manage more than 50 clients even when another Splunk Enterprise component is added.

Regards,
VK
I'm looking for the regular expression wizards out there. I need to do a rex with two capture groups: one for name, and one for value. I plan to use the replace function, and throw everything else away but those two capture groups (e.g., "\1: \2").

Here are some sample events:

name="Building",value="Southwest",descendants_action="success",operation="OVERRIDE"
name="Building",value=["Northeast","Northwest"],descendants_action="failure",operation="OVERRIDE"
name="Building",value="Southeast",descendants_action="success",operation="OVERRIDE"
name="Building",value="Northwest"
name="Building",value="Northwest",operation="OVERRIDE"

So far I just have this:

^name=\"(.*)\",value=\[?(.*)\]?

Any ideas?
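In case it helps frame what I'm after, this is roughly the shape of answer I'm imagining (an untested sketch; the non-greedy character classes and the sed-style replacement are my own guesses, not something I've verified against all my data):

| rex field=_raw "^name=\"(?<name>[^\"]+)\",value=(?<value>\[[^]]*\]|\"[^\"]*\")"

or, using the replace idea directly:

| rex mode=sed field=_raw "s/^name=\"([^\"]+)\",value=(\[[^]]*\]|\"[^\"]*\").*/\1: \2/"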
Hi,

I am having an issue with my data ingestion. I have an XML log file that I am ingesting that is 1 GB in size, but it is consuming up to 18 GB of my ingestion license. How do I correct this? I did a test converting the data to JSON, but I am still seeing mismatches between the log file size and the volume being ingested into Splunk. I am comparing the daily ingestion against the daily log size.
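For reference, this is the kind of search I have been using to compare what the license meter records per day against the file size on disk (a rough sketch; my_index and my_sourcetype are placeholders for my own values):

index=_internal source=*license_usage.log type=Usage idx=my_index st=my_sourcetype
| timechart span=1d sum(b) AS bytes_indexed
| eval GB_indexed = round(bytes_indexed/1024/1024/1024, 2)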
What I am trying to do is graph/timechart active users. I am starting with this query:

index=anIndex sourcetype=perflogs
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| timechart distinct_count(LoginUserID) partial=false

This works, and the resulting graph appears to be correct for 120 minutes, which produces 5-minute time buckets. My question arises when I shorten the time period to 60 minutes, which produces 1-minute buckets.

In the 120-minute graph with 5-minute buckets, the 6:40-6:45 bucket shows 318 distinct users, but in the 90-minute graph with 1-minute buckets, the corresponding 1-minute buckets show 136, 144, 142, 131, and 117 distinct users. I understand that a user can be active one minute, inactive for the next minute or two, and then active again in the fourth or fifth minute, which is what is happening.

My question is how to make each 1-minute bucket also count users who were active in the previous five 1-minute buckets, so the number represents users who are logged in rather than just users who were active in that minute. I believe I can add minspan=5min as a kludge, but I am wondering whether there is a way to achieve this at the 1-minute span.

I believe what I need to do is run two queries: the first one as above, then an append that queries for events from -5min to -10min. But from what I have tried, it either is not working or I am not doing it correctly. Basically, I am trying to find the user IDs that are active in the current 1-minute bucket that were also active in the previous bucket(s), and then do a distinct_count() on the user IDs collected from both queries.
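To illustrate the behaviour I am after, this is roughly the direction I have been experimenting with instead of the append idea (an untested sketch of a sliding five-minute window, reusing the same field names as my query above):

index=anIndex sourcetype=perflogs
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| sort 0 _time
| streamstats time_window=5m dc(LoginUserID) AS users_last_5m
| timechart span=1m max(users_last_5m) AS active_users partial=false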
We have a Splunk v9.1.1 cluster with a three-member search head cluster (SHC) running on EC2 instances in AWS. In implementing disaster recovery (DR) for the SHC, I configured AWS Auto Scaling to replace the search heads on failure. Unfortunately, Auto Scaling does NOT re-use the IP of the failed instance on the new instance, probably because of other up- and down-scaling use cases, so replacement instances will always have different IPs than the failed instance.

Starting with a healthy cluster with an elected search head captain and RAFT running, I terminated one search head. During the minute or two that it took AWS Auto Scaling to replace the search head instance, RAFT stopped and there was no captain. I was then unable to add a NEW third search head to the cluster.

OK, so then I created a similar scenario, but this time had Auto Scaling issue the commands to force one of the remaining two search heads to be an UN-ELECTED static captain, and then confirmed this had worked; I had two search heads, one being a captain. The Splunk documentation mentions using a static captain for DR. However, when I again tried to add the new instance as the third search head, I again received the error that RAFT was not running, there was no cluster, and therefore the member could not be added!

So what is Splunk's recommendation for disaster recovery in this situation? I understand this is a chicken-and-egg scenario, but how are you expected to recover if you can't get a third search head in place in order TO recover? It seems counter-intuitive that Splunk would disallow adding a third search head, especially with the static search head captain in place.

There are some configurable timeout parameters in server.conf in the [shcluster] stanza - would increasing any of these values keep the SHC in place long enough for Auto Scaling to replace that third search head instance so that it can then join the SHC? If so, which timeouts should I use, and what values would be appropriate so they wouldn't interfere with day-in, day-out usage?

I'm stuck on this and haven't been able to progress any further. Any and all help is greatly appreciated!
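For context, these are the commands I have been scripting for the static-captain step, based on my reading of the docs (a sketch; the hostname and credentials are placeholders):

# On the surviving member being promoted to static captain
splunk edit shcluster-config -mode captain -captain_uri https://sh1.example.com:8089 -election false -auth admin:changeme

# On each remaining member (and, in theory, on the newly built member), point at the static captain
splunk edit shcluster-config -mode member -captain_uri https://sh1.example.com:8089 -election false -auth admin:changeme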
Hello again Splunk experts. This is my current situation:

job_no    field4
131       string1
          string2
132       string3
          string4

|table job_no, field2, field4
|dedup job_no, field2
|stats count dc(field4) AS dc_field4 by job_no
|eval calc=dc_field4 * count

produces:

job_no    field2    dc_field4    calc
131       6         2            12
132       6         2            12

This all works fine. The problem is that I also want to include the strings (string1, string2, string3, string4) in my table, like this:

job_no    field4              field2    dc_field4    calc
131       string1, string2    6         2            12
132       string3, string4    6         2            12

Any help would be greatly appreciated.
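In case it clarifies what I mean, this is the sort of single-stats variant I was picturing (an untested sketch reusing my own field names):

|stats count dc(field4) AS dc_field4 values(field4) AS field4 by job_no
|eval calc=dc_field4 * count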
I created a dashboard with a query that looks like this:

index=cbclogs sourcetype=cbc_cc_performance source="/var/log/ccccenter/performancelog.log" Company IN ($company_filter$) LossType IN ($losstype_filter$) QuickClaimType IN ($QCT_filter$)
| eval minsElapsed=round(secondsElapsed/60,0)
| timechart median(minsElapsed) by LOB

Suppose LOB has string values like "A", "B", "C", "D", "E", "F", "G", "H". Currently, all of these values are shown in the Y axis on the right side. How can I combine "A", "B", "C" as "A", "D", "E", "F" as "E", and "G", "H" as "G", so that the right side shows only three values without affecting the correctness of the dashboard? Actually, I am not sure whether I should call this right-hand colourful column the Y axis.

Thanks a lot!
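Something along these lines is what I have been considering (an untested sketch; LOB_group is just a name I made up for the remapped field):

index=cbclogs sourcetype=cbc_cc_performance source="/var/log/ccccenter/performancelog.log" Company IN ($company_filter$) LossType IN ($losstype_filter$) QuickClaimType IN ($QCT_filter$)
| eval minsElapsed=round(secondsElapsed/60,0)
| eval LOB_group=case(LOB=="A" OR LOB=="B" OR LOB=="C", "A", LOB=="D" OR LOB=="E" OR LOB=="F", "E", LOB=="G" OR LOB=="H", "G", true(), LOB)
| timechart median(minsElapsed) by LOB_group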
Missing titleband from the search. It seems like it's a subsearch of an LDAP query or something; if I do Get-ADUser in PowerShell, it's missing from there too. Here is what we had from the event logs:

|table titleband adminDescription cn co company dcName department description displayName division eventtype georegion givenName host locationCode mail mailNickname sAMAccountName title userAccountControl userAccountPropertyFlag userPrincipalName
We want to implement MFA for login to our Splunk Enterprise servers. Currently we are using the LDAP authentication method. Is it possible to implement MFA on top of LDAP?
Hello,

I didn't get any hits on this issue, so I'm starting a new thread, and I didn't find any previously reported defect for this possible issue. We are running Splunk Enterprise 9.5.0.

There is a need to get an accurate count of the exact number of duplicate events being indexed into Splunk, and I ended up having to use the transaction command. To do this, I ran a search for a 24-hour period, which completed in about 16 minutes, to get the total number of events and the data set size. Then I ran the same search with the dedup command to remove all the duplicate events:

| dedup _time _raw

The problem is that the dedup command proceeds to use up all the available memory until Splunk kills the search with the message "Your search has been terminated. This is most likely due to an out of memory condition."

The data set for the 24-hour search period is 13 billion+ events, and the data set size is 1.6 TB. The three search heads each have 374 GB of memory and usually sit at 14%-16% memory usage. Using the higher figure of 16%, that leaves about 314 GB of memory available when the search starts. The search with dedup proceeds to consume that 314 GB over about 1 hour and 40 minutes until it is all used up, and Splunk kills the search when memory is near 100% utilized.

So these are the two reasons dedup could be using all remaining available memory:

1. The dedup command is designed this way, using a larger and larger amount of memory as the data set increases in size / number of events.
2. There is a defect with the dedup command in Splunk Enterprise 9.5.0.

Can someone explain which of the two reasons it is?
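For what it's worth, this is the memory-lighter alternative I have been sketching to get the same duplicate count without dedup (untested; event_key and the hashing approach are my own invention here, and your_index is a placeholder):

index=your_index earliest=-24h@h latest=@h
| eval event_key=_time . "|" . sha256(_raw)
| stats count AS copies by event_key
| stats sum(copies) AS total_events count AS unique_events sum(eval(copies-1)) AS duplicate_events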
Hi,

I need to convert the search below into a Web data model search:

index=es_web action=blocked host=* sourcetype=*
| stats count by category
| sort 5 -count

Thanks
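This is roughly the tstats form I was expecting it to take (a sketch only, assuming the CIM Web data model is accelerated and that these events are mapped to it):

| tstats count from datamodel=Web where Web.action="blocked" by Web.category
| rename Web.category AS category
| sort 5 -count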
Hi everyone,

Do you know a way to change the value of a piece of metadata for a universal forwarder? I added my own metadata with _meta = id::Mik; added in fields, I can see it in my events. Now I'd like to change the value Mik to another value. Can I use the Python API? The Splunk CLI? The REST API? There seem to be a lot of possible solutions, but for the moment I can't find one.

Thanks
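To make it concrete, this is where the value currently lives on the forwarder (a sketch; the monitor stanza path is just an example of one of my inputs):

# inputs.conf on the universal forwarder
[monitor:///var/log/example]
_meta = id::Mik

Editing that line to a new value and restarting splunkd is the only method I have found so far; I would prefer something I can drive remotely.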
Hello to everyone!

I have a strange issue with some events that come from our virtual environment. As you can see in the screenshot, some events become multi-lined, but I don't want to observe such behavior. I suspected the SHOULD_LINEMERGE setting, which can merge events with similar timestamps, but I have now turned it off. Below the Splunk search window I attached a result from regex101.com that shows how my LINE_BREAKER regex behaves. I also tried to use the default ([\r\n]+) regex, but it did not work either. What did I miss in my approach?

My props.conf for the desired sourcetype looks like:

[vsi_esxi_syslog]
LINE_BREAKER = ([\r\n]+)|(\n+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d+>
Looking to create an alert if a host in a lookup stops sending data to Splunk index=abc. I have created a lookup called hosts.csv with all the hosts expected to be logging for a data source. Now I need to create a search/alert that notifies me if a host in this lookup stops sending data to index=abc.

I was trying something like the search below, but I'm not having much luck:

| tstats count where index=abc host NOT [| inputlookup hosts.csv] by host

The lookup hosts.csv has a single column named host, for example:

host
hostname101
hostname102
hostname103
hostname104
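This is the general shape I think I'm after, for anyone who can confirm or correct it (an untested sketch; the 24-hour threshold is just an example):

| inputlookup hosts.csv
| eval last_seen=0
| append [| tstats max(_time) AS last_seen where index=abc by host]
| stats max(last_seen) AS last_seen by host
| where last_seen < relative_time(now(), "-24h@h")
| eval last_seen=if(last_seen=0, "never", strftime(last_seen, "%F %T"))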
So we are trying to send syslog from our BeyondTrust PRA appliance to Splunk. We have validated via an SSL/TLS test that the connection is good, and I have the certificate on both sides, so that appears to be okay. We do not see the events in the index, though.

We configured inputs.conf in the /local folder as follows:

[tcp-ssl://6514]
disabled = false

[SSL]
requireClientCert = false
serverCert = /opt/splunk/etc/auth/custom/combined.cer
sslVersions = tls1.2
cipherSuite = AES256-SHA

We have the input set up in the web interface and have the correct index and source defined, but no events are coming in. I've seen several articles from multiple years back on configuring this. The TLS handshake works, so what are we missing? Thanks in advance!

FYI: We tried this over UDP using a non-TLS input and the data comes in fine, but when we try with SSL it never shows up in the index.
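For anyone willing to dig in, this is the internal-log search we have been using to look for clues on the receiving side (a sketch; the keyword filter is just a guess at what the relevant errors would contain):

index=_internal sourcetype=splunkd component=TcpInputProc (log_level=ERROR OR log_level=WARN OR "SSL")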
Hi Team,

We need your help in reviewing Data Collectors created in AppDynamics for a .NET WCF service. We need your guidance, as the newly created Data Collectors are not working.

Thank you,
Deepak Paste
Hi,

I am trying to upload an Elastic log file to Splunk. This is an example of one entry in a long log:

{"_index":"index-00","_type":"_doc","_id":"TyC0RIkBQC0jFzdXd-XG","_score":1,"_source":"{"something_long":"long json"}\n","stream":"stderr","docker":{"container_id":"d48887cdb80442f483a876b9f2cd351ae02a8712ec20960a9dc66559b8ccce87"},"kubernetes":{"container_name":"container","namespace_name":"namespace","pod_name":"service-576c4bcccf-75gzq","container_image":"art.com:6500/3rdparties/something/something-agent:1.6.0","container_image_id":"docker-pullable://art.com:6500/3rdparties/something/something-agent@sha256:02b855e32321c55ffb1b8fefc68b3beb6","pod_id":"3c90db56-3013a73e5","host":"worker-3","labels":{"app":"image-service","pod-template-hash":"576c4bcccf","role":"image-ervice"}},"level":"info","ts":1689074778.913063,"caller":"peermgr/peer_mgr.go:157","msg":"Not enough connected peers","connected":0,"required":1,"@timestamp":"2023-07-11T11:26:19.133326179+00:00"}}

As you can see, the timestamp is at the end, so I have set up my props.conf as follows:

[elastic_logs]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
description = make sure timestamp is taken
pulldown_type = 1
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 1000

I can see the timestamp in the Splunk entries, but that is all I can see now; all the other fields are not displayed. What am I doing wrong?
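In case it matters, this is the search-time variant I was planning to test next, dropping the indexed extractions (a sketch only; I have not confirmed it copes with the nested, escaped _source string):

# props.conf, relying on search-time JSON extraction instead of INDEXED_EXTRACTIONS
[elastic_logs]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
KV_MODE = json
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 1000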
Hello,

We have a few apps that are no longer needed in our on-premise environment. We maintain a git repo for the configs. Can anyone please help me with the steps to uninstall/remove an app?

Thanks
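For reference, this is the sort of procedure I assume is involved, but I would like confirmation before running it (a sketch; app_name and the credentials are placeholders):

# On the Splunk Enterprise instance, repeat per app
$SPLUNK_HOME/bin/splunk remove app app_name -auth admin:changeme
$SPLUNK_HOME/bin/splunk restart

# Presumably the app also needs to be removed from the git repo so it is not redeployed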
index=serverX sourcetype=CAServer
| dedup ID
| stats count
| eval status=if(count=00,"XXX is ok","XXX is not ok")
| rangemap field=count low=0-0 severe=1-100

This works and returns a count of 34, shown in red. However, I want to return the status text in red, not just the number. I can return the status with | stats status, but it is shown in black and white. Any help is appreciated.
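To show what I mean, this is the direction I have been poking at (an untested sketch; I am assuming the panel can be set to display the status field while the range field produced by rangemap still drives the colour):

index=serverX sourcetype=CAServer
| dedup ID
| stats count
| eval status=if(count=00,"XXX is ok","XXX is not ok")
| rangemap field=count low=0-0 severe=1-100
| table status range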
How can I get the SVC usage for saved searches and ad-hoc searches? These logs don't have it:

index="_internal" data.searchType="scheduled"

index="_audit" sourcetype="audittrail" action="search" info="completed"
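In the meantime, this is the rough per-search CPU view I have been using as a stand-in, since I could not find SVC directly in those logs (a sketch only; it approximates relative search cost, not actual SVC, and assumes the _introspection data is available):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats sum(data.pct_cpu) AS cpu_pct_sum by data.search_props.sid data.search_props.type data.search_props.user data.search_props.app
| sort - cpu_pct_sum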