All Posts


@sainag_splunk wrote: The disabled setting in SHC only impacts captain election and member roster management.

Ok, so it's minimal, and has no real impact on cluster operation. Thanks
https://en.m.wikipedia.org/wiki/Raft_(algorithm) Without the Raft algorithm, captain election will not work properly. You might get away with a static captain, but it's not fault tolerant, so if you lose your static captain your SHC will more or less fall apart.
Ok. I recognize filtered logs. What is your business case here?
One important thing - you can't add or remove individual entries to/from a CSV lookup. You can only overwrite it as a whole.
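The practical pattern is therefore read-modify-write: load the whole lookup, append or adjust rows, and write the whole file back out. A minimal sketch, untested, assuming a hypothetical lookup file mylookup.csv with host/ip columns and a base search that produces the new rows:

```
| inputlookup mylookup.csv
| append
    [ search index=main sourcetype=inventory
      | table host ip ]
| dedup host
| outputlookup mylookup.csv
```

The dedup (or a stats/latest by key) is what decides which copy of a duplicated row survives the rewrite.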
It depends whether we're talking about configuring extractions in transforms or trying to do it with search commands.

With configured extractions you just need to capture two groups - one for the field name, another for the value - and either use FORMAT = $1::$2 if using unnamed groups, or name them _KEY_1 and _VAL_1 respectively if using named groups.

If you want to do that in SPL you need to use the {} notation, like

| eval {fieldname}=fieldvalue

where fieldname is a field containing your target field name. Most probably you'll want to split your input into key:value chunks as a multivalued field, then use foreach to iterate over those chunks, split them into final key-value pairs, and use the {key} notation to define the output field.
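A rough sketch of that foreach approach against a shortened Tags value like the one in the question (foreach mode=multivalue requires Splunk 9.0+; treat this as untested illustration, not a drop-in answer):

```
| makeresults
| eval Tags="avd:vm, dept:support services, manager:JohnDoe@email.com"
| eval chunk=split(Tags, ", ")
| foreach mode=multivalue chunk
    [ eval key=mvindex(split(<<ITEM>>, ":"), 0),
           {key}=mvjoin(mvindex(split(<<ITEM>>, ":"), 1, -1), ":") ]
| fields - chunk key
```

The mvjoin/mvindex pair re-assembles values that themselves contain colons, so a tag value like a URL or path survives the split.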
Try this one:

<your_search>
| rex field=Tags "avd:(?<avd>[^,]+),\s*dept:(?<dept>[^,]+),\s*cm-resource-parent:(?<cm_resource_parent>[^,]+),\s*manager:(?<manager>[^$]+)"

------
If you find this solution helpful, please consider accepting it and awarding karma points!
Two things:
1. A Heavy Forwarder is a Splunk Enterprise instance; it's just doing forwarding.
2. If you can receive your UDP traffic at the forwarder, why send it to another Splunk instance via syslog instead of the native Splunk-to-Splunk protocol?
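As a sketch of what that native-protocol setup might look like on the forwarder (hostnames, ports, and certificate paths below are all hypothetical placeholders):

```
# inputs.conf on the forwarder: listen for the appliance's UDP syslog
[udp://514]
sourcetype = syslog
connection_host = ip

# outputs.conf on the forwarder: forward over TLS (S2S) to the indexer
[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslVerifyServerCert = true
```

The receiving indexer needs a matching TLS-enabled splunktcp input on the same port for this to work.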
Perhaps this answer will help: https://community.splunk.com/t5/Splunk-Enterprise/Having-Syslog-logs-into-SPLUNK/m-p/693546/highlight/true#M19778
I've imported a csv file and one of the fields, called "Tags", looks like this:

Tags="avd:vm, dept:support services, cm-resource-parent:/subscriptions/e9674c3a-f9f8-85cc-b457-94cf0fbd9715/resourcegroups/avd-standard-pool-rg/providers/microsoft.desktopvirtualization/hostpools/avd_standard_pool_1, manager:JohnDoe@email.com"

I'd like to split each of these tags up into its own field/value, AND extract the first part of the tag as the field name. The resulting fields/values would look like this:

avd="vm"
dept="support services"
cm-resource-parent="/subscriptions/e9674c3a-f9f8-85cc-b457-94cf0fbd9715/resourcegroups/avd-standard-pool-rg/providers/microsoft.desktopvirtualization/hostpools/avd_standard_pool_1"
manager="JohnDoe@email.com"

I've looked at a lot of examples with rex, MV commands, etc., but nothing that pulls the new field name out of the original field. The format of the Tags field is always the same as listed above, for all events. Thank you!
Apply the following workaround in default-mode.conf. You can also push this change via a Deployment Server push across thousands of universal forwarders. Add index_thruput to the list of disabled processors, i.e. add the following stanza as-is to default-mode.conf:

#Turn off a processor
[pipeline:indexerPipe]
disabled_processors = index_thruput, indexer, indexandforward, latencytracker, diskusage, signing, tcp-output-generic-processor, syslog-output-generic-processor, http-output-generic-processor, stream-output-processor, s2soverhttpoutput, destination-key-processor

NOTE: PLEASE DON'T APPLY THIS ON HF/SH/IDX/CM/DS. Use a different app (not the SplunkUniversalForwarder app) to push the change.
I have an appliance that can only forward syslog via UDP. Is there a way for me to forward the UDP syslog to a machine that has a Heavy Forwarder or UF on it, and have the forwarder relay the syslog via TLS to the server running my Splunk Enterprise instance?
I can't find my Splunk login page. Where should I check for the Splunk Enterprise login page?
Pro tip: it is critical to give the full use case and all relevant data when asking a question. The solution is the same, just add location to the OUTPUT clause. But before I illustrate code, you also need to answer whether location info is available in the index data. My speculation is that it is not, but that's just speculation. It is very important to describe such nuances.

Anyway, supposing location is not in the index data, here is the search you can use:

index=* host=* ip=* mac=*
| fields host ip mac
| dedup host ip mac
| lookup hostname.csv hostname AS host OUTPUT hostname AS match, location
| table host ip mac location
| outputlookup hostname.csv

Of course, location will be blank for any host that didn't have location in the old version of hostname.csv.
Thank you for your help.

hostname.csv:

ip        | mac      | hostname | location
x.x.x.x   |          | abc_01   | NYC
          | 00:00:00 | def_02   | DC
x.x.x.y   | 00:00:11 | ghi_03   | Chicago
          |          | jkl_04   | LA

I would like to search in Splunk with index=* host=* ip=* mac=*, compare my host field to the hostname column from the lookup file "hostname.csv", and, if it matches, write the ip and mac values to the hostname.csv file. The base search doesn't have location; I would like to keep the location column as it is. The result would look like this:

new hostname.csv file:

ip        | mac        | hostname | location
x.x.x.x   | 00:new:mac | abc_01   | NYC_orig
x.x.y.new | 00:00:00   | def_02   | DC_orig
x.x.x.y   | 00:00:11   | ghi_03   | Chicago_orig
new.ip    | new:mac    | jkl_04   | LA_orig

Thank you.
I was thinking about somehow matching these two events on s and qid so I can insert a field with the hdr_mid value into the first event. That would give me all events with hdr_mid and qid in them, so grouping by hdr_mid and qid in the final stats statement would pull the list of recipients.

This is why you need to describe the full use case, including all relevant data, not just the parts you are trying to extract something from. @gcusello's idea is still applicable here; you just substitute stats with eventstats:

<your-search>
| eventstats values(hdr_mid) AS hdr_mid values(eval(if(cmd="send",rcpts,""))) AS rcpts BY s qid
| stats whatever by hdr_mid qid
Let me try to understand the requirement. You will only compare hostname, then add ip and mac from the index, but only if the hostname already exists in hostname.csv. Is this correct?

lookup is your friend.

index=* host=* ip=* mac=*
| fields host ip mac
| dedup host ip mac
| lookup hostname.csv hostname AS host OUTPUT hostname AS match
| table host ip mac
| outputlookup hostname.csv
Quick summary: After @sainag_splunk identified this as a 9.2 bug, I updated the affected instance to 9.3 and the problem is gone.
The disabled setting in SHC only impacts captain election and member roster management.
I have a hostname.csv file and it contains these attributes:

hostname.csv:

ip        | mac      | hostname
x.x.x.x   |          | abc_01
          | 00:00:00 | def_02
x.x.x.y   | 00:00:11 | ghi_03
          |          | jkl_04

I would like to search in Splunk with index=* host=* ip=* mac=*, compare my host field to the hostname column from the lookup file "hostname.csv", and, if it matches, write the ip and mac values to the hostname.csv file. The result would look like this:

new hostname.csv file:

ip        | mac        | hostname
x.x.x.x   | 00:new:mac | abc_01
x.x.y.new | 00:00:00   | def_02
x.x.x.y   | 00:00:11   | ghi_03
new.ip    | new:mac    | jkl_04

Thank you for your help!!!
What if you have multiple values that you want to rename to the same field? This is what I tried:

| rename "Message.TaskInfo.CarHop Backup.LastResult"="-2147020576" AS Result
| rename "Message.TaskInfo.CarHop Backup.LastResult"=1 AS Result
| rename "Message.TaskInfo.CarHop Backup.LastResult"=0 AS Result
| rename "Message.TaskInfo.AI Restart Weekly.LastResult"=267011 AS Result
| rename "Message.TaskInfo.CarHop Backup.LastResult"=267009 AS Result
| rename "Message.TaskInfo.CarHop Backup.LastResult"=2 AS Result

This is not working for me.