All Posts



Since Splunk Enterprise 9.2.0, Splunk has introduced "Deployment Server Scaling", which involves placing Deployment Servers behind a load balancer (or using DNS mapping) and granting them all access to a single network share. Each DS uses the share path to update and share app configurations and to post log files. This allows the DSs to keep apps, client lists, and client status in sync with one another. While Splunk documentation mentions 50 clients, this is only in reference to ensuring the DS is on its own server, not sharing functionality with any other Splunk instance such as a search head, indexer, Monitoring Console, License Manager, etc. A Deployment Server can actually handle up to 25,000 clients if granted enough system and network resources to manage the load. With Deployment Server scaling, the number of forwarders that can be managed multiplies with each Deployment Server added to the "cluster": two can manage up to 50,000 clients, three up to 75,000, and so on. All Deployment Servers in a cluster share all apps and all clients. DS Scaling is also referred to as "clustering", though it works nothing like indexer or search head clusters -- the different DSs don't communicate with one another directly or formally form a "cluster". This allows very large environments to manage a multitude of forwarders. Too many forwarders? Add another Deployment Server.

Here are a few links:
- Splunk Documentation: Implement a Deployment Server Cluster
- Splunk Documentation: Estimate Deployment Server Performance
- The Deployment Server section of the Splunk Lantern article Scaling your Splunk Deployment, which consolidates relevant Splunk documentation
- Splunk Community Article: Deployment Server Scalability Best Practices
- The "Discovered Intelligence" blog article on setting up a Splunk Deployment Server cluster. I have not (yet) tested their suggestions, but this is a great place to start for a quick overview of what's needed.
Deployment Servers are on track for significant improvements in the near future as well, with the goal of reducing/eliminating the need for 3rd party tools such as Puppet or Ansible for those who wish to manage everything within Splunk itself.
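On the client side of such a setup, each forwarder would point at the load balancer's virtual address (or the DNS alias) rather than at any individual DS. A minimal sketch, assuming a hypothetical hostname ds-lb.example.com for the load balancer:

```
# deploymentclient.conf on each forwarder (hostname is hypothetical)
[deployment-client]

[target-broker:deploymentServer]
# Point at the load balancer or DNS alias fronting the DS cluster,
# not at any single Deployment Server instance
targetUri = ds-lb.example.com:8089
```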
Hey Will, I just wanted to say a huge THANK YOU for your help! Your suggestion to increase MAX_DAYS_AGO to 3000 completely solved my issue, and Splunk now correctly recognizes my timestamps. Honestly, I had been struggling with this for quite some time, and your solution saved me a lot of time and frustration. I really appreciate the effort you put into answering my question.   Thanks again, and have a great day!   Best, Emil
Where did you install the custom app?  It must be installed on the indexer(s) to create the index, but it must also be installed on the search head(s) for the index to appear in the GUI.
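For reference, a minimal sketch of what the app's indexes.conf might look like (the index name is hypothetical):

```
# indexes.conf inside the custom app; the app must reach the indexers
# to create the index, and the search heads for it to appear in the GUI
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb
```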
I have created an index via the CLI (script) in a custom application, but the index is not appearing in the Splunk GUI.
It worked, thanks Whisperer, a helping hand as always.
Try something like this:

index=linux host=* sourcetype=bash_history "systemctl start" OR "systemctl enable" OR ("mv" "/opt/")
| eval systemctl=if(searchmatch("systemctl"), "systemctl", null())
| eval mv_opt=if(searchmatch("mv") AND searchmatch("/opt/"), "mv_opt", null())
| stats dc(mv_opt) as mv_opt dc(systemctl) as systemctl by host
| where mv_opt==1 AND systemctl==1
Dear Splunkers, I need a search that tells me whether there's a host that has these logs. Below is a pseudo search that shows what I really want:

index=linux host=* sourcetype=bash_history AND ("systemctl start" OR "systemctl enable")
| union [search index=linux host=* sourcetype=bash_history (mv AND /opt/)]

Just to make it clearer: I want a match only if a server generated a log that contains mv AND /opt/ and another log that contains "systemctl start" OR "systemctl enable". Thanks in advance.
Does the following search help? This uses json_ functions and mvexpand to split out and then match up the fields and expressions:

| datamodel
| spath output=modelName modelName
| search modelName=Network_Traffic
| eval objects=json_array_to_mv(json_extract(_raw,"objects"))
| mvexpand objects
| eval calculations=json_array_to_mv(json_extract(objects,"calculations"))
| mvexpand calculations
| eval outputFields=json_array_to_mv(json_extract(calculations,"outputFields"))
| mvexpand outputFields
| eval fieldName=json_extract(outputFields,"fieldName")
| eval expression=json_extract(calculations,"expression")
| table modelName fieldName expression
It looks like your time extraction settings are correct; however, you need to set MAX_DAYS_AGO to a higher value (e.g. 3000) for Splunk to accept that 2017 timestamp, as the default is 2000 days and therefore Splunk is not accepting the date. Let me know if adding MAX_DAYS_AGO = 3000 to your extraction config works! Good luck, Will
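Put together, the props.conf stanza might look like this (the sourcetype name is a placeholder; the TIME_ settings mirror what you already configured):

```
# props.conf on the indexer or heavy forwarder that parses this data
[my_access_log]
TIME_PREFIX = \[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 32
# Default is 2000 days; raise it so 2017 events are accepted
MAX_DAYS_AGO = 3000
```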
Hello everyone,

I'm having trouble getting Splunk to recognize timestamps correctly, and I hope someone can help me out. I'm importing an access log file, where the timestamps are formatted like this:

[01/Jan/2017:02:16:51 -0800]

(See the attached screenshot for a live output.) However, Splunk is not recognizing these timestamps and instead assigns the indexing time.

I have tried adjusting the settings in the sourcetype configuration (see screenshot) and have set the following values:
• Timestamp format: %d/%b/%Y:%H:%M:%S %z
• Timestamp prefix: \[
• Lookahead: 32

Unfortunately, the timestamps are still not recognized correctly. Do I need to modify props.conf or inputs.conf as well? Is my timestamp format correct, or should it be defined differently? Could there be another issue in my extraction settings? The log file looks like the screenshot above. Should I maybe change the log file with some scripting in order to change the format?

I would really appreciate any guidance! Thank you in advance.

Best regards
Hi @isoutamo , Thx a lot  . BR
Hi @SanjayReddy. Thanks for the feedback; that screenshot is for when the receiver is a forwarder. This is a good explanation, as @isoutamo mentioned: https://community.splunk.com/t5/Knowledge-Management/Splunk-Indexer-Forwarder-Acknowledgement-explained/m-p/695624. Thanks.
Hi @takuyaikeda, please try this:

index=_audit action=search info=granted search=*
    NOT "search_id='scheduler" NOT "search=' | history" NOT "user=splunk-system-user"
    NOT "search='typeahead" NOT "search=' | metadata type=* | search totalCount>0"
| stats count by user search _time
| sort _time
| convert ctime(_time)
| stats list(_time) as time list(search) as search by user

Ciao.
Giuseppe
Hello, is there any way to get the field name and its expression from a datamodel using the REST API (via a Splunk query)? I am already using this query, but here the fields and their expressions are shuffled:

| datamodel
| spath output=modelName modelName
| search modelName=Network_Traffic
| rex max_match=0 field=_raw "\[\{\"fieldName\":\"(?<fields>[^\"]+)\""
| rex max_match=0 field=_raw "\"expression\":\"(?<expression>.*?)\"}"
| table fields expression
We operate by using scheduled searches to periodically search through logs collected by Splunk, and trigger actions when log entries matching certain conditions are found. You can create a list of actions triggered recently (for example, within the past week) by searching for alert_fired="alert_fired" in the _audit index. At this time, is it possible to join the log entries that matched in each search execution to the list? (I want to know the result of "| loadjob <sid>" for each search.) The expected output is a table with the search execution time (_time), the search name (ss_name), and the log entries.
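One possible approach for the question above, sketched here, is to use the map command to run | loadjob against the sid recorded in each alert_fired audit event. The field names sid and ss_name come from the _audit events; treat this as an untested starting point rather than a working solution:

```
index=_audit action=alert_fired earliest=-7d
| table _time ss_name sid
| map maxsearches=100 search="| loadjob $sid$
    | eval ss_name=\"$ss_name$\", trigger_time=\"$_time$\""
```

Note that map discards the parent results and can be expensive; limiting the time range and maxsearches keeps the fan-out bounded.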
@nsxlogging Your company's security policy may be blocking the download of the Splunk app or add-on from Splunkbase. To resolve this, forward the error to your IT/security team so they can check firewall/proxy logs, verify whether Splunkbase or specific file types are restricted, and whitelist them if justified. Alternatively, try downloading from a different network (if permitted) or a non-corporate device and transfer the file via approved methods.
@cyberbilliam  Is this fixed? Need confirmation before migrating to Splunk Cloud.
Actually, the acknowledgment is not sent until the replication factor has been met on the indexers. You should read the post below and also those linked from it. Here is one old, excellent post about it: https://community.splunk.com/t5/Knowledge-Management/Splunk-Indexer-Forwarder-Acknowledgement-explained/m-p/695624
Hi @Wenjian_Zhu

Indexer acknowledgment will be sent after the data is written to the indexer's disk; there is no relation between data replication and indexer acknowledgment. The acknowledgment is to let the forwarder know the data has been received at the indexer end, so the forwarder that sent the data can remove those events from its wait queue. It is also recommended to enable acknowledgment at both the intermediate forwarder and the indexer.
Dear splunkers, when setting useAck = true (https://docs.splunk.com/Documentation/Splunk/9.4.0/Forwarding/Protectagainstlossofin-flightdata), does the source peer send the acknowledgment after writing the data to its file system and ensuring the replication factor is met, or simply after writing the data to its file system?

Best regards,
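For context, useAck is enabled per output group in the forwarder's outputs.conf. A minimal sketch, with a hypothetical indexer address:

```
# outputs.conf on the forwarder (server address is hypothetical)
[tcpout:primary_indexers]
server = idx1.example.com:9997
# Hold events in the forwarder's wait queue until the indexer
# acknowledges them, protecting against in-flight data loss
useAck = true
```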