All Posts

Hi, I have two Splunk searches, search-1 and search-2, and I have to create a Splunk alert for search-2 based on search-1: if the search-1 count is greater than 0, then trigger the search-2 alert.   regards vch
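One common pattern, sketched under the assumption that both searches are saved as reports named "search-1" and "search-2" (illustrative names, not confirmed by the post): gate search-2 on search-1's event count so the alert search only returns results when search-1 found something, then set the alert trigger condition to "number of results greater than 0".

| savedsearch "search-2"
``` append search-1's event count as a single column, then copy it onto every row ```
| appendcols [| savedsearch "search-1" | stats count AS search1_count]
| eventstats max(search1_count) AS search1_count
``` keep search-2's results only when search-1 returned at least one event ```
| where search1_count > 0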
Hello, how can I add a button or a link to run a search and download a CSV file in Dashboard Studio? At this time, I have to click the magnifying glass to open the search, then click "Export" to download the CSV file. I don't have access to the REST API or Splunk Developer. Please suggest. Thank you for your help.
I figured out the issue. The API fields needed to be double quoted or the reference broke. I assume it has something to do with the message being a JSON object. Outside of that minor syntax issue, your solution worked. Thank you!
Hi @evinasco08, It may take some time for the third indexer to get replicated copies from the other indexers and make them searchable. Did you wait long enough for these operations to finish? It is normal that your search and replication factors are not met while the cluster only has two copies of some buckets during the migration. You can monitor this process on the Bucket Status page; you should have seen a lot of pending buckets. The cluster will reach the complete state after these fix-ups are finished. After the rollback to RF=2 and SF=2, excess buckets are normal: the cluster manager was trying to replicate buckets to reach the RF=3, SF=3 state, and when you rolled back, those third copies became excess. If you want to keep RF=2 and SF=2, you can simply and safely remove the excess buckets from the Bucket Status page. Setting RF and SF equal to the indexer count is not a best practice, because if any of your indexers has a problem or restarts, the cluster will not be able to reach the complete state without enough peers. I advise keeping RF=2 and SF=2 with 3 indexers.
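For reference, a minimal sketch of where RF and SF live if you set them in configuration on the cluster manager instead of Splunk Web; the values simply mirror the RF=2/SF=2 recommendation above, and the mode value may need adjusting on older versions:

# $SPLUNK_HOME/etc/system/local/server.conf on the cluster manager
[clustering]
mode = manager
replication_factor = 2
search_factor = 2

Restart or reload the cluster manager for the change to take effect.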
Hi @Amit.Bisht, Thanks for letting me know! Could you come back here and share the outcome of the Support case?
Good afternoon. I have this Splunk architecture: 1 search head, 2 indexers in a cluster, 1 master node/license server, 1 Monitoring Console/deployment server, 2 heavy forwarders, SF=2, RF=2. I added a new indexer to the cluster, and after that I tried to change the RF and SF, both to 3, but when I change the values from Splunk Web on the master node and restart the instance, the platform shows me the following message. Then I did a rollback, returned to SF=2 and RF=2, and everything was normal, but the Bucket Status page shows I need to change the SF and RF, and I need to know if this will fix the issues with the indexes. Regards
Hi @jariw, In my experience, maintenance mode and stopping all peer nodes are not necessary; you will push the configuration from the Cluster Manager anyway. Before pushing the new configuration, do not forget to test your new indexes.conf configuration on a standalone instance and to check the repFactor=auto setting (as mentioned in item 8.a); see the sketch below.
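A minimal sketch of the kind of indexes.conf stanza being discussed, with repFactor=auto so the new index is replicated across the cluster (the index name and paths are illustrative assumptions):

# indexes.conf pushed from the Cluster Manager
[my_clustered_index]
homePath   = $SPLUNK_DB/my_clustered_index/db
coldPath   = $SPLUNK_DB/my_clustered_index/colddb
thawedPath = $SPLUNK_DB/my_clustered_index/thaweddb
repFactor  = auto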
Hi @AchimK, You should delete the colon before the port number, as shown below. Or why don't you just delete this stanza from inputs.conf?

[tcp://5514]
connection_host = ip
host = splunkindex
index = linux
sourcetype = linux_messages_syslog
disabled = 1
Thanks @ITWhisperer. Using makeresults to pull the time is much faster than searching an index, since it only pulls a single event. Is it possible to change the font type (bold), color, and background in the "Table" visualization type? Thanks again!!
Hi @scottc_3, I would love to learn more about the requirement to use an array. Can you share more about why it's not feasible to separate the values in your use case? 
Steps to reproduce
1. Install with the following docker-compose file:
version: '3.7'
services:
  splunk:
    image: splunk/splunk:latest
    container_name: splunk
    ports:
      - "8000:8000"
      - "9997:9997"
      - "8088:8088"
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=Password1
    volumes:
      - splunk_data_var:/opt/splunk/var
      - splunk_data_etc:/opt/splunk/etc
    restart: unless-stopped
volumes:
  splunk_data_var:
  splunk_data_etc:
2. Change the admin password from the web UI.
3. Restart the Splunk docker instance.
Hello, the Splunk Connect for Syslog page for the Thycotic add-on (Product section) shows information that is related to Tenable. See https://splunk.github.io/splunk-connect-for-syslog/1.96.4/sources/Thycotic/. Please review and update the section. Best, Pramod
This thread is several months old with an accepted solution so you may get better results by posting a new question.
This is exactly my experience in Splunk Cloud as well. I fed table data to a dropdown, and the dropdown uses an array of the entire result set instead of listing the values separately. Can we get a fix for this?
Splunk support responded that this was a known, as yet unpublished, bug in the software. I was hoping the 9.2 release fixed this, but sadly it did not.
Note:
1) The spath command can be expensive, especially against large data sets.
2) If all you need is to parse a string and get the values, consider regular expressions for JSON data as well.
In the rex below, I named the a|b|c|d field "foo" in case it has value later on. If not, it doesn't need to be used.

| makeresults ```creating dummy data based on the original question```
| eval json_data="{data: {a : { x: {value_x} y: {value_y}}} }"
| append [ makeresults | eval json_data="{data: {b : { x: {value_x} y: {value_y}}} }" ]
| append [ makeresults | eval json_data="{data: {c : { x: {value_x} y: {value_y}}} }" ]
| append [ makeresults | eval json_data="{data: {d : { x: {value_x} y: {value_y}}} }" ]
```ending the creation of dummy data```
| rex field=json_data "{(?<foo>\w+)\s:\s{\s\sx:\s{(?<x_value>.+)}\s\sy:\s{(?<y_value>.+)}}}" ```parse strings using a regular expression```
| table json_data x_value y_value ```display results of regular expression in a table```

Results in:
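For comparison, a minimal sketch of the spath approach mentioned in note 1, assuming the field actually contains valid JSON (the dummy strings above are not valid JSON, so this sketch uses properly quoted keys; the field names are illustrative):

| makeresults
| eval json_data="{\"data\": {\"a\": {\"x\": \"value_x\", \"y\": \"value_y\"}}}"
| spath input=json_data path=data.a.x output=x_value
| spath input=json_data path=data.a.y output=y_value
| table json_data x_value y_value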
It looks like the logs from ESET are indeed encrypted. I tried with syslog-ng and rsyslog, but the result is the same. I saw online that a similar issue was reported directly to ESET.
Hi, did you find a solution? I have the same issue.
You can save a search as a report and then open "Advanced Edit" from Settings -> Searches, reports, and alerts -> "Edit" dropdown. Then search for "preview" and disable it there. You will find an option similar to "display.general.enablePreview"; it defaults to the number 1 for "True". Change it to 0 and click the Save button. Then you can just use | savedsearch "YourReportName" (see https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Savedsearch). This is particularly useful if you're using an external system to pull the data via the API and the developers of the integration were unaware of the preview function being enabled by Splunk search's default mode of operation.
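If you prefer editing configuration files over the Advanced Edit page, a minimal sketch of the equivalent stanza is below, assuming the report is named "YourReportName" and is stored in an app's local directory (the exact attribute name should match whatever your Advanced Edit page shows):

# $SPLUNK_HOME/etc/apps/<your_app>/local/savedsearches.conf
[YourReportName]
display.general.enablePreview = 0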
Hi, another option is to use those props.conf and transforms.conf files, as you have already looked into. Here is one old post showing how to do it: https://community.splunk.com/t5/Monitoring-Splunk/FortiGate-Firewall-is-consuming-the-license/m-p/648680 There are a lot of other examples in the community and also on docs.splunk.com; a sketch of the general pattern is below. One thing you must remember is that you must put those configurations on the first full Splunk instance on the path from the source to the indexers. This could be a HF or an indexer. r. Ismo
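A minimal sketch of the general props.conf/transforms.conf pattern for dropping unwanted events before they are indexed (the sourcetype, transform name, and regex are illustrative assumptions, not taken from the linked post):

# props.conf on the first full Splunk instance (HF or indexer)
[fgt_log]
TRANSFORMS-drop_noise = drop_fortigate_noise

# transforms.conf on the same instance
[drop_fortigate_noise]
REGEX = action=accept
DEST_KEY = queue
FORMAT = nullQueue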