All Posts

Hi @_pravin , check whether you have enough disk space on all your indexers. If you do, open a case with Splunk Support. Ciao. Giuseppe
You may use /opt/splunk/bin/genRootCA.sh to regenerate ca.pem and cacert.pem.
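As a rough sketch (paths assume a default /opt/splunk install, and the -d flag pointing at the auth directory is an assumption — check the script's own usage output for your version, and back up first):

```
# Back up the existing certs before regenerating anything
cp -r /opt/splunk/etc/auth /opt/splunk/etc/auth.bak

# Regenerate the root CA (writes new ca.pem and cacert.pem)
cd /opt/splunk/bin
./genRootCA.sh -d /opt/splunk/etc/auth
```

Note that regenerating the root CA invalidates any server certs signed by the old one, so plan to redistribute certs afterwards.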
Hi, I have an indexer cluster with 4 indexers. All indexers run Splunk 8.1.14, and the servers' OS is RedHat 7.9. The indexer cluster is multisite: we have two sites, each with two servers. In addition, we have a cluster manager, a search head cluster, a development search head, a development indexer, and a deployment server. All instances are on Splunk 8.1.14. I am looking at the Splunkd Thread Activity for possible clues to a problem we have when indexing data in production. The problem is that sometimes some events are missing in production, no matter what sourcetype or ingestion method is used. We suspect it could be a problem with the indexers or with the index the data is ingested into.
Yes it can, and no it won't, because you won't be extracting fields at index time unless you use INDEXED_EXTRACTIONS=json. Splunk is very good at applying only the config that matters, so when in doubt send the app to both indexers and search heads; Splunk usually just figures it out. The duplicate-extractions issue happens when you do BOTH index-time (INDEXED_EXTRACTIONS=json) AND search-time (KV_MODE=json) extraction in your props.conf. That's when they may collide, and it's why I say I ALMOST never enable INDEXED_EXTRACTIONS=json — I would always prefer to review search-time extractions first, then only move the key fields I need to index time for performance reasons.
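To make that concrete, a minimal props.conf sketch (the stanza name is a placeholder) that stays search-time only:

```
# props.conf -- search-time-only JSON extraction.
# Safe to deploy to both indexers and search heads: KV_MODE is a
# search-time setting, so it is simply ignored on the indexers.
[my_json_sourcetype]
KV_MODE = json

# Deliberately NOT set, to avoid index-time/search-time collisions:
# INDEXED_EXTRACTIONS = json
```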
Hi, I am trying to push the configuration bundle from the CM to the indexers. I keep getting the error message "Last Validate and Check Restart: Unsuccessful". The validation is done for one of the indexers, and it's stuck on 'checking for restart' for the other two. When I checked the last-change date on all the indexers, only one of them has been updated and the other two have not — but this is the opposite of what is shown in the CM's UI. Regards, Pravin
To configure NetScaler to pass the source IP, you'll need to enable Use Source IP (USIP) mode. Here are the steps:
1. Log in to NetScaler: open your NetScaler management interface.
2. Navigate to Load Balancing: go to Traffic Management > Load Balancing > Services.
3. Open a Service: select the service you want to configure.
4. Enable USIP Mode: in the Advanced Settings, find the Service Settings section and select Use Source IP Address.
This will ensure that NetScaler uses the client's IP address for communication with the backend servers. Would you like more detailed instructions or help with another aspect of your setup?
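The same change can also be made from the NetScaler CLI; the service name below is a made-up placeholder, and you should confirm the exact syntax against your NetScaler/ADC version's documentation:

```
# Enable USIP on a hypothetical service named "svc_web_backend"
set service svc_web_backend -usip YES

# Verify the setting took effect
show service svc_web_backend
```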
OK, here my doubt is... Can one app which contains props.conf (with KV_MODE=json) be distributed to both indexers and search heads? Could it lead to duplication of fields or events by any chance? I am asking about index-time vs. search-time extraction. Is that OK?
Are you trying to find errors sending email *from* Splunk, or using Splunk to find any email-sending errors? I'll assume the former for now. Splunk logs the email it sends in python.log. Searching for "sendemail" should find those entries. The only errors you're likely to find are failures to hand the email to the SMTP server. Any failures beyond that point would be sent as mailer-daemon messages to the sending mailbox, and you'll only be able to search for those if you are Splunking the mailbox (not common).
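A starting-point search for those log entries might look like the following (source path per a default install; adjust the filter terms to what your python.log actually contains):

```
index=_internal source=*python.log* sendemail (ERROR OR error)
| table _time host _raw
```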
It seems the company firewall blocked outbound traffic to port 8088. Issue explained.
Simplest way to put it: create a single app with all your sourcetype configs in it, then distribute that app using the appropriate mechanism for 1. indexers (manager node) and 2. search heads (deployer for a SHC, or DS/directly if standalone).
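As a sketch, such an app (the name `my_sourcetypes_app` is a placeholder) handed to both the manager node and the deployer might be laid out like this:

```
my_sourcetypes_app/
├── default/
│   ├── props.conf        # sourcetype definitions (KV_MODE, TIME_FORMAT, ...)
│   └── transforms.conf   # index-time transforms, if any
└── metadata/
    └── default.meta      # export/permission settings
```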
I checked splunkd.log but did not find anything listed under "connected" or 9997. I did a netstat -an and cannot find any connections to 9997. Where else can I check, on a Windows system, whether logs are forwarding?
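A couple of places to check on the Windows UF host (paths assume a default install under C:\Program Files\SplunkUniversalForwarder):

```
:: Show the configured forward-servers and whether each is active or inactive
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk list forward-server

:: Search splunkd.log for the output processor's connection attempts
findstr /i "TcpOutputProc" "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log"
```

If `splunk list forward-server` shows your indexer under "inactive forwards", the UF knows about it but cannot connect, which usually points at a firewall or listener problem on 9997.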
Can I put KV_MODE = json in the already existing props.conf on the manager node so it gets pushed to the peer nodes? But you said it should be on the search heads? Should I create a new app on the deployer, place props.conf (with KV_MODE = json) in its local directory, and then deploy it to the search heads? Sorry I am asking so many questions — I am literally confused here...
KV_MODE=json would go in the sourcetype stanza on the search heads. INGEST_EVAL would be props/transforms on the indexers. Technically you can just put all the configs everywhere and Splunk will sort it out.
Hi, after your basic search you can create a table. Then you can use replace, like | replace blub with 1blub ... Then you create a chart and do a rename afterwards.
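A toy example of that sequence (field and value names are made up for illustration):

```
... your base search ...
| table host status
| replace "blub" WITH "1blub" IN status
| chart count BY status
| rename status AS "Status"
```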
Hi everyone! My goal is to create an alert that monitors, across ALL saved searches, whether any recipient email address no longer exists (mainly colleagues who left the company or similar). My idea was to search for the same Mail Delivery Subsystem pattern that appears when sending an email from Gmail (or any other provider) to a non-existing address. But I didn't find anything in the _internal index, nor with a REST call to the saved searches, and index=mail is empty. Any idea?
It is Splunk Enterprise, not Cloud. "Which will allow kv_mode=json to do its thing. you never should go straight to indexed_extractions=json." — Where do I need to set KV_MODE? We have syslog servers where the UF is installed, and we have a DS which pushes apps to the deployer and the manager. From there they are pushed to the peer nodes and search heads. Where exactly can I put this attribute (KV_MODE=json)? We have props and transforms in manager apps that get pushed to all peers. I don't see any props on the search heads.
Thank you. I think you just confirmed I would need to write code. It is strange that AppD does have this information (it can split on ThreadId/ThreadName), but it won't let you see it within the business transactions screens.
If they want to parse JSON automatically, the sending agent/mechanism must send fully formed JSON events. Review the event with them... it's not JSON, it's JSON inside an unstructured log line. In fact, this looks like some JSON-through-syslog adventure. Yum!

<12>Nov 12 20:15:12 localhost whatever: data={"a":"b","c":"d"}

The easiest way in syslog is to send kv pairs in the log events instead of JSON, like foo=bar bar=baz:

<12>Nov 12 20:15:12 localhost whatever: a=b c=d

Then Splunk can just pick out all the kv pairs automagically, instead of having to parse JSON to do the same thing. Many apps have this option in their logger — you might get lucky. JSON provides no value here if we have to live with whatever pipeline is sending this syslog filled with JSON stuff.

If the app can't change its format, or the ingestion path can't be reviewed, then the next option is surgery on the inbound event, where Splunk config is used to parse out the syslog facility, the timestamp (which doesn't even have the year or a precision timestamp) and the host into indexed fields, then remove this part of the event:

<12>Nov 12 20:15:12 localhost whatever: data=

so all that's left when Splunk indexes the _raw event is:

{"a":"b","c":"d"}

which will allow KV_MODE=json to do its thing. You should never go straight to INDEXED_EXTRACTIONS=json. See this awesome .conf talk on the power of Splunk ingest_eval: https://conf.splunk.com/files/2020/slides/PLA1154C.pdf
Then these examples on GitHub from the .conf talk:
https://github.com/silkyrich/ingest_eval_examples/blob/master/default/props.conf
https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf
Or look into Splunk Edge Processor or Ingest Processor if you are a Cloud customer. Options after that are reviewing the ingestion process and moving away from syslog to more modern collection, to get better data like ISO timestamps with timezone, etc.
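A hedged sketch of that surgery (the stanza name and regex are illustrative only — test them against your real events before deploying):

```
# props.conf (indexers / heavy forwarders)
[my_syslog_json]
TRANSFORMS-strip_prefix = strip_syslog_json_prefix

# transforms.conf (indexers / heavy forwarders)
# Strip "<pri>Mon DD HH:MM:SS host tag: data=" so only the JSON payload remains
[strip_syslog_json_prefix]
INGEST_EVAL = _raw=replace(_raw, "^<\d+>\w{3}\s+\d+\s[\d:]+\s\S+\s\S+:\sdata=", "")

# props.conf (search heads) -- now that _raw is pure JSON
[my_syslog_json]
KV_MODE = json
```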
But whatever you use, it still needs to be able to format the event properly if you want the benefit of a structured data format. I strongly suggest you consult with the Splunk Sales Engineer on the customer's account so that an expert or partner can help them achieve this and you can learn by working with them. Is this an on-prem Enterprise user, or a Cloud user?
Usually there are events every 30s or something like this. Probably your environment has some hiccups, or it's stalled for some reason? It's hard to say more without more information about your setup and knowledge of the OS and hardware.