All Posts

Simplest way to put it: create a single app with all your sourcetype configs in it, then distribute that app using the appropriate mechanism for 1. indexers (via the cluster manager node) 2. search heads (via the deployer for a SHC, or DS/directly if standalone).
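For reference, a minimal sketch of the corresponding push commands, assuming the app is already staged in the usual bundle directories (the target URI is a placeholder):

# On the cluster manager: pushes apps under $SPLUNK_HOME/etc/master-apps (manager-apps on 9.x) to the indexer peers
splunk apply cluster-bundle

# On the deployer: pushes apps under $SPLUNK_HOME/etc/shcluster/apps to the SHC members
splunk apply shcluster-bundle -target https://<any_shc_member>:8089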
Hi, I have an indexer cluster with 4 indexers. All indexers run Splunk version 8.1.14. The servers' OS is RedHat 7.9. The indexer cluster is multisite; we have two sites, and each site has two servers associated. In addition, we have a cluster manager, a search head cluster, a development search head, a development indexer, and a deployment server. All instances run Splunk 8.1.14. I am looking at the Splunkd Thread Activity for possible clues to a problem we have when indexing data in production. The problem is that sometimes some events are missing in production, no matter what sourcetype or ingestion method is used. We suspect it could be a problem with the indexers or with the index where the data is ingested.
I checked splunkd.log but did not find anything listed under connected or 9997. I did a netstat -an and cannot find any connections to 9997. Where else can I check on a Windows system whether logs are being forwarded?
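Not an official checklist, but two quick checks that usually work on a Windows universal forwarder (paths assume the default install location):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk" list forward-server
    (shows configured vs. active forward-server connections)

findstr /i "TcpOutputProc" "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log"
    (connection attempts and errors to the indexers are logged by the TcpOutputProc component)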
Can I put kv_mode = json in the already existing props.conf on the manager node so it will be pushed to the peer nodes? But you said it should be on the search heads? Should I create a new app on the deployer, place props.conf in its local directory (with kv_mode = json in it), and then deploy it to the search heads? Sorry I am asking so many questions, I am literally confused here...
kv_mode=json would be in the sourcetype stanza on the search heads. INGEST_EVAL will be in props/transforms on the indexers. Technically you can just put all the configs everywhere and Splunk will sort it out.
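A minimal sketch of that split, with a placeholder sourcetype name (my_syslog_json) and a trivial INGEST_EVAL just for illustration:

On the search heads, props.conf:
[my_syslog_json]
KV_MODE = json

On the indexers (or heavy forwarders), props.conf:
[my_syslog_json]
TRANSFORMS-ingest = my_ingest_eval

and transforms.conf:
[my_ingest_eval]
INGEST_EVAL = event_length=len(_raw)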
Hi, after your basic search you can create a table. Then you can use replace, like | replace blub with 1blub ... Then you create a chart and do a rename afterwards.
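A minimal end-to-end sketch of that idea, with placeholder index, field, and value names:

index=your_index
| stats count by category
| replace "blub" with "1blub" in category
| chart sum(count) by category
| rename category as "Category"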
Hi everyone! My goal is to create an alert to monitor ALL saved searches for any email address that no longer exists (mainly, colleagues that left the company or similar). My idea was to search for the same pattern as the Mail Delivery Subsystem message that appears when sending an email from Gmail (or any other provider) to a non-existing address. But I didn't find anything in the _internal index, nor with a REST call to saved searches, and index=mail is empty. Any idea?
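One place to start, since saved search definitions are reachable over REST from the search bar. This sketch lists the recipient addresses configured on email alert actions (standard endpoint and field names, but verify them on your version):

| rest /servicesNS/-/-/saved/searches
| search action.email=1
| table title eai:acl.app action.email.to action.email.cc

You could then compare action.email.to against a lookup of valid addresses and alert on the difference.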
It is Splunk Enterprise, not Cloud. "Which will allow kv_mode=json to do its thing. you never should go straight to indexed_extractions=json." Where do I need to set kv_mode? We have syslog servers where the UF is installed, and we have a DS which pushes apps to the deployer and the manager; from there they are pushed to the peer nodes and the search heads. Where exactly can I set this attribute (kv_mode=json)? We have props and transforms in the manager's apps, which are pushed to all peers. I don't see any props on the search heads.
Thank you. I think you just confirmed I would need to write code. It is strange that AppD does have this information (it can split on ThreadId/ThreadName), but it won't let you see this information within the business transaction screens.
If they want JSON parsed automatically, the sending agent/mechanism must send fully formed JSON events. Review the event with them... it's not JSON, it's JSON inside an unstructured log line. In fact, this looks like some JSON-through-syslog adventure. Yum!

<12>Nov 12 20:15:12 localhost whatever: data={"a":"b","c":"d"}

The easiest way in syslog is to send kv pairs in the log events instead of JSON, like foo=bar bar=baz:

<12>Nov 12 20:15:12 localhost whatever: a=b c=d

Then Splunk can just pick out all the kv pairs automagically, instead of having to parse JSON to do the same thing. Many apps have this option in their logger, so you might get lucky. JSON provides no value here if we have to live with whatever pipeline is sending this syslog filled with JSON stuff.

If the app can't change its format, or the ingestion path can't be reviewed, then the next option is surgery on the inbound event, where Splunk config is used to parse the syslog facility, the timestamp (which doesn't even have the year or sub-second precision) and the host into indexed fields, then remove this part of the event:

<12>Nov 12 20:15:12 localhost whatever: data=

so all that's left when Splunk indexes the _raw event is:

{"a":"b","c":"d"}

which will allow kv_mode=json to do its thing. You never should go straight to indexed_extractions=json.

See this awesome conf talk on the power of Splunk INGEST_EVAL: https://conf.splunk.com/files/2020/slides/PLA1154C.pdf
Then these examples on GitHub from the conf talk:
https://github.com/silkyrich/ingest_eval_examples/blob/master/default/props.conf
https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf
Or look into Splunk Edge Processor or Ingest Processor if you are a Cloud customer.

Options after that are reviewing the ingestion process and moving away from syslog to more modern collection, to get better data like ISO timestamps with timezone, etc. But whatever you use still needs to be able to format the event properly if you want the benefit of a structured data format.

I strongly suggest you consult with the Splunk Sales Engineer on the customer's account so that an expert or partner can help them achieve this and you can learn by working with them. Is this an on-prem Enterprise user? Or a Cloud user?
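For illustration only, a minimal sketch of that surgery using SEDCMD (a simpler alternative to the INGEST_EVAL approach in the linked talk; the sourcetype name and regex are assumptions based on the sample event above):

On the indexers / heavy forwarder, props.conf:
[my_syslog_json]
# timestamp extraction happens before SEDCMD, so pull it from the syslog header first
TIME_PREFIX = ^<\d+>
TIME_FORMAT = %b %d %H:%M:%S
# strip "<12>Nov 12 20:15:12 localhost whatever: data=" so only the JSON body remains in _raw
SEDCMD-strip_syslog_prefix = s/^<\d+>\w{3} +\d+ [\d:]+ \S+ \S+ data=//

On the search heads, props.conf:
[my_syslog_json]
KV_MODE = json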
Usually there are events every 30s or something like that. Probably your environment has some hiccups, or it's stalled for some reason? It's hard to say more without more information about your setup and some knowledge of the OS and hardware.
Maintenance mode is one parameter in the server.conf file, so when you copy it to the target it will be there. Then just disable maintenance mode and it will be removed from server.conf. If you change both name and IP there could be issues, as all peers and SHs use the name or IP to recognize the cluster! I'm not 100% sure whether the peers are actually recognized by GUID, but I would almost propose you do an offline, not online, migration, and you would need to change this on all peers before starting them. The same applies to the other components/nodes.
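If it helps, the toggle is a one-liner on the cluster manager (standard CLI, run with appropriate auth):

splunk show maintenance-mode
splunk disable maintenance-mode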
Thanks. It looks like you migrated the IP but kept the DNS names. We'll be moving both. If we enable maintenance mode on the old cluster manager, then migrate, how would we ensure maintenance mode is lifted after moving to the new one?
Hi @mattymo, Thanks for your detailed explanation. What format would be good so that JSON data is extracted automatically in Splunk? I can suggest the sender follow that format if possible. And will there be any problem if they remove that unwanted matter like the date/time? Also, they want all JSON field values extracted, not just specific ones, and it would be difficult to write regex for all of them.
@rohithvr19 The Python file should be in the bin folder of your app. Can you please confirm whether the individual script works fine on its own? Have you tried my shared app on your local machine? If my app works fine, then try adding your Python code to that app. Thanks, KV
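For orientation, the expected layout is roughly this (app and script names are placeholders):

$SPLUNK_HOME/etc/apps/my_app/
    bin/
        my_script.py        <- your Python code goes here
    default/
        inputs.conf         <- e.g. a scripted input stanza [script://./bin/my_script.py]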
You have probably looked at this: https://dev.splunk.com/enterprise/tutorials/module_setuppages/plansetup ? One place where you could look for help is the CIM app. Another is the TA for *nix, where those input values are modified on a setup screen.
Hi, Looking at the activity of the Splunkd threads on the indexers, I've seen in the monitoring console that sometimes there is no activity for a period of 1 minute. Is this normal? [screenshot: evidence] Regards, thank you very much.
@Eldemallawy
1. Try this (gives the amount of license used per index):

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx
| eval mb=round(bytes/1024/1024,3)

If you want overall usage, you can use this timechart version:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
| timechart span=1d sum(b) as usage_mb
| eval usage_mb=round(usage_mb/1024/1024,3)

For per-index usage per day, you can use this:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
| bucket span=1d _time
| stats sum(b) as bytes by _time idx
| eval mb=round(bytes/1024/1024,3)

2. Set up a Monitoring Console: https://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview
Hey, So I have a playbook that fetches multiple files and adds them to the SOAR vault. I can then send each individual file to Jira by specifying the file's vault_id in the update_ticket action on the Jira app. Ideally I would like to send only one file over to Jira: an archive containing each of the other files. I can create a file and add it to the archive after seeing this post - https://community.splunk.com/t5/Splunk-SOAR/SOAR-Create-File-from-Artifacts/m-p/581662 However, I don't know how I could take each individual file from the vault and add it to this archive before sending it over. Any help would be appreciated! Thanks
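Not a tested playbook, but a sketch of the idea using the SOAR automation API. phantom.vault_info and phantom.vault_add are documented calls; the temp-dir location, the function name, and the exact keys on the returned records are assumptions worth verifying on your version:

import os
import tempfile
import zipfile

import phantom.rules as phantom

def zip_vault_files(container, vault_ids, archive_name="attachments.zip"):
    # build the archive somewhere the SOAR process can write (assumption: system temp dir)
    archive_path = os.path.join(tempfile.gettempdir(), archive_name)
    with zipfile.ZipFile(archive_path, "w") as zf:
        for vault_id in vault_ids:
            # vault_info returns (success, message, list_of_file_records); each record
            # should expose the on-disk 'path' and original 'name' of the vault file
            success, message, records = phantom.vault_info(vault_id=vault_id)
            for record in records or []:
                zf.write(record["path"], arcname=record["name"])
    # add the archive back to the vault; the returned vault id is what you would
    # pass to the Jira update_ticket action
    success, message, new_vault_id = phantom.vault_add(
        container=container, file_location=archive_path, file_name=archive_name
    )
    return new_vault_id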
I am building a Splunk dashboard that displays a table of content. Once it's displayed, I want to have a couple of buttons, Stop All and Start All; clicking them should execute a search that invokes Python code to perform the actions. Can someone advise whether that's possible?
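One common pattern (a sketch, not the only way): wrap the Python in a custom search command and have the dashboard buttons run it. The command and script names here are hypothetical:

commands.conf in your app:
[stopall]
filename = stopall.py
chunked = true

stopall.py would be a splunklib SearchCommand in the app's bin folder; a button (link input, drilldown, etc.) can then set a token that triggers a search like:

| stopall target="all"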