All Posts


https://splunkbase.splunk.com/app/3696 Get this app and place it on your DMC as a best practice. The REST calls will reach anything that is a search peer, and the DMC node typically has your entire environment as search peers in order to monitor it. The app does suggest installing on a search head, but then you might miss access to the CM, HFs, etc. Of course, if you have a single-node deployment, there is no need to worry about where you install it. Follow the instructions for a cloud-based environment.
Hi @yuanliu, were you able to find a solution for this issue? We are facing the same issue.
Hello everyone! I want to build a dashboard that can access information from the config files of an indexer cluster. I know the typical way to access config files is via the REST endpoints "/services/configs/conf-*", but as I understand it, these endpoints only show configuration files stored under /system/local/*.conf. Is there a way to access config files stored under /manager-apps/local?
@richgalloway Hi there. Thanks for the answer about the MGMT port. I'm a little confused by your answer that the UF does not support HEC. The previous UF version, 8.2.6, worked fine as a HEC receiver, bound to port 8088 and forwarding the data over TCP to the indexer nodes (9997). Maybe Splunk removed that logic from the UF in versions after 8.2.6? What is the replacement for HEC? We use the UF because its parsing does not consume license. What is the latest UF version that can be configured as an HTTP Event Collector?
The procedure you're thinking of is common and works well.  Good luck!
Exactly what I was looking for, thank you so much !! @ITWhisperer 
There is the management mode setting that controls whether the UF listens on a TCP port or via UDS. See https://docs.splunk.com/Documentation/Forwarder/9.3.2/Forwarder/AboutManagementMode The management port itself is set in web.conf, not inputs.conf (it's not a data input):

[settings]
mgmtHostPort = 127.0.0.1:9089

UFs do not support HTTP input.
You might also need global=false on the streamstats: | streamstats current=f global=f last(rxError) as priorErr last(_time) as priorTim by host
The coalesce will work; it's just that if the count is 1, the component could occur only in component1 or only in component2, and you would have to do something slightly different if you want to distinguish which set the component comes from.
OK, trying:

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| rex "RX\spackets\s(?<rxPackets>\d+)\s"
| rex "RX\serrors\s+\d+\s+dropped\s(?<rxDrop>\d+)\s"
| sort - _time
| streamstats current=f last(rxError) as priorErr last(_time) as priorTim by host
| where not (rxError=priorErr)
| chart last(rxError), last(rxPackets), last(rxDrop) by host

Will that show me when rxError changes?
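Outside Splunk, the rex patterns above can be sanity-checked with ordinary regexes. A minimal sketch, assuming an ifconfig-style log line (the sample text below is invented; the real /var/log/nic-errors.log format may differ):

```python
import re

# Hypothetical sample line in the style of `ifconfig` output (an assumption).
line = "RX packets 1024  bytes 999  RX errors 7  dropped 3  overruns 0 "

# The same patterns used in the SPL rex commands above (SPL's (?<name>...)
# becomes Python's (?P<name>...)).
rx_error = re.search(r"RX\serrors\s(?P<rxError>\d+)\s", line)
rx_packets = re.search(r"RX\spackets\s(?P<rxPackets>\d+)\s", line)
rx_drop = re.search(r"RX\serrors\s+\d+\s+dropped\s(?P<rxDrop>\d+)\s", line)

print(rx_error.group("rxError"), rx_packets.group("rxPackets"), rx_drop.group("rxDrop"))
# -> 7 1024 3
```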
Right now I'm just running a proof of concept. I'll move the field definitions to the indexers later. Right now I'm trying to detect if diff pos1=last(rxError) pos2=last-1(rxError); that is, I want to detect when the value of rxError changes from last-1 to last. Working on that.
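That last-vs-previous comparison can be sketched outside Splunk as a simple walk over time-ordered values per host, roughly what streamstats current=f ... | where rxError!=priorErr does. The sample numbers below are invented for illustration:

```python
# (host, _time, rxError) tuples, already sorted by time; made-up data.
samples = [
    ("hostA", 1, 0), ("hostA", 2, 0), ("hostA", 3, 5),
    ("hostB", 1, 2), ("hostB", 2, 2),
]

prior = {}    # host -> previous rxError value
changes = []  # (host, _time, old, new) whenever the counter moves
for host, t, rx in samples:
    if host in prior and prior[host] != rx:
        changes.append((host, t, prior[host], rx))
    prior[host] = rx

print(changes)  # -> [('hostA', 3, 0, 5)]
```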
Hi @PickleRick, just to inform you: I have replaced the endpoint below, but the timestamp mismatch issue still persists.
Thanks. But I searched the documentation for how to enable HEC from configuration files with no results, and I cannot find any link on how to enable the management port. Maybe you can help with a direct link?

$ cat /opt/splunkforwarder/etc/apps/splunk_httpinput/local/inputs.conf

[http]
disabled = 0

$ cat /opt/splunkforwarder/etc/system/local/inputs.conf

[http]
disabled = 0

[http://input]
disabled = 0

Used: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/UseHECusingconffiles
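For reference, on instances that do support HEC (full Splunk Enterprise or a heavy forwarder, not a UF), the conf-file setup needs a named token stanza with a token value in addition to the global [http] one. A sketch with placeholder names and a placeholder token:

```ini
# etc/apps/splunk_httpinput/local/inputs.conf on full Splunk Enterprise
# or a heavy forwarder. The stanza name "my_hec_token" and the token
# value are placeholders; generate your own GUID.
[http]
disabled = 0

[http://my_hec_token]
disabled = 0
token = <generate-a-guid-here>
```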
We are collecting every 10 minutes and have about 1000 servers, with another 1000 coming early next year. We have a long-term interest in monitoring all of the output for general network health. The task at hand is being able to check whether there are network issues when we also notice Ceph OSD issues. The advice for that is to look for dropped packets on the host side. So, that is what I'm trying to capture, and to detect when the dropped-packet value changes.
Yes:

splunk = client.connect(host='localhost', port=8089, splunkToken='eyJraWQiOiJzc.....')
You have to use /services/collector/event?auto_extract_timestamp=true  
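A minimal sketch of targeting that endpoint from Python; the host, port, and token below are placeholders, and the request is only built (not sent), since there is no real server here:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder values; substitute your HEC host and token.
base = "https://splunk.example.com:8088/services/collector/event"
hec_url = f"{base}?{urlencode({'auto_extract_timestamp': 'true'})}"
token = "00000000-0000-0000-0000-000000000000"

payload = {"event": {"message": "hello"}, "sourcetype": "_json"}
req = Request(
    hec_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": f"Splunk {token}"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.get_method(), req.full_url)
```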
Hello everyone! I need help/a hint: I tried to set up log forwarding from macOS (ARM) to Splunk, but the logs never arrived. I followed the instructions from this video, and also installed and configured the Add-on for Unix and Linux. And what index will they appear in? Thanks! Inside /Applications/SplunkForwarder/etc/system/local I have: inputs.conf, outputs.conf, server.conf.

inputs.conf

[monitor:///var/log/system.log]
disabled = 0

outputs.conf

[tcpout:default-autolb-group]
server = ip:9997
compressed = true

[tcpout-server://ip:9997]

server.conf

[general]
serverName =
pass4SymmKey =

[sslConfig]
sslPassword =

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free
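On the "what index" question: if the monitor stanza sets no index, events go to the forwarder's default (normally main, unless the Unix add-on's inputs say otherwise). A sketch of pinning it explicitly; the index and sourcetype values here are assumptions, and the index must already exist on the indexer:

```ini
# inputs.conf; index/sourcetype values are illustrative placeholders.
[monitor:///var/log/system.log]
disabled = 0
index = main
sourcetype = syslog
```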
Adding the app name in the machine agent configuration file never works. You need to add the parameter below in the application startup file (for the Java agent): -Dappdynamics.agent.uniqueHostId=<unique_Host_Name> Once you add the above line, restart the application/JVM and then restart the machine agent once. It will work for sure.
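A sketch of what that startup line could look like; the agent path, host id, and jar name below are placeholders, not taken from the post:

```
# Hypothetical Java startup line; adjust paths and names to your setup.
java -Dappdynamics.agent.uniqueHostId=my-unique-host \
     -javaagent:/opt/appdynamics/javaagent/javaagent.jar \
     -jar myapp.jar
```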
Hi @ITWhisperer, actually it is not a subset; it's just that I'm passing a different token for task and getting the 2nd table. In this case, will coalesce work?

index=abc task="$task1$" | dedup component1 | table component1
| append [index=abc task="$task2$" | dedup component2 | table component2]
| table component1 component2
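One way to keep track of which side each component came from is to tag the rows before appending, rather than relying on coalesce alone. An untested sketch using the same tokens (the field name "origin" is made up here):

```
index=abc task="$task1$" | dedup component1
| eval component=component1, origin="task1" | table component origin
| append
    [ search index=abc task="$task2$" | dedup component2
      | eval component=component2, origin="task2" | table component origin ]
| stats values(origin) as origin by component
```

A component listed with both origins appears in both tasks; a single origin tells you which set it came from.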
Hello, we have a multisite indexer cluster with Splunk Enterprise 9.1.2 running in Red Hat 7 VMs, and we need to migrate them to other VMs running Red Hat 9. The documentation requires that all members of a cluster have the same OS and version. I was thinking of simply adding one new indexer (Red Hat 9 VM) at a time and detaching an old one, forcing the bucket count, so for a short time the cluster would have members with different OS versions. Upgrading from Red Hat 7 to Red Hat 9 directly in the Splunk environment is not possible. I would like to know if there are critical issues to face while the migration is happening. I hope the procedure won't last more than 2 days.