All Posts



I have an appliance that can only forward syslog via UDP. Is there a way for me to forward the UDP syslog to a machine that has a Heavy Forwarder or Universal Forwarder on it, and have the forwarder relay the syslog via TLS to the server running my Splunk Enterprise instance?
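This pattern is common: the forwarder listens for UDP syslog locally and relays indexed-over-SSL traffic upstream. A minimal sketch of the two files involved on the forwarder — the port numbers, hostname, certificate paths, and sourcetype below are placeholders, not from the post, and exact SSL attribute names vary by Splunk version (check outputs.conf.spec for yours):

```ini
# inputs.conf (on the Heavy Forwarder) - listen for the appliance's UDP syslog
[udp://514]
sourcetype = syslog
connection_host = ip

# outputs.conf (on the Heavy Forwarder) - relay to the indexer over SSL
[tcpout:ssl_group]
server = splunk.example.com:9997
useSSL = true
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem
clientCert = /opt/splunk/etc/auth/server.pem
```

The receiving Splunk Enterprise instance must have an SSL-enabled splunktcp input on the matching port for this to work.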
I can't find my Splunk login page. Where should I check for the Splunk Enterprise login page?
"The base search doesn't have location. I would like to keep the location column as is." Pro tip: it is critical to give the full use case and all relevant data when asking a question. The solution is the same, just add location to the output. But before I illustrate the code, you also need to answer whether location info is available in the index data. My speculation is that it is not, but that's just speculation; it is very important to describe such nuances. Anyway, supposing location is not in the index data, here is a search you can use:

index=* host=* ip=* mac=*
| fields host ip mac
| dedup host ip mac
| lookup hostname.csv hostname AS host output hostname AS match location
| table host ip mac location
| rename host AS hostname
| outputlookup hostname.csv

(The rename keeps the output column named hostname so the file's schema stays intact for the next lookup.) Of course, location will be blank for any host that didn't have a location in the old version of hostname.csv.
Thank you for your help. hostname.csv:

ip          mac         hostname    location
x.x.x.x                 abc_01      NYC
            00:00:00    def_02      DC
x.x.x.y     00:00:11    ghi_03      Chicago
                        jkl_04      LA

I would like to search in Splunk with index=* host=* ip=* mac=*, compare my host to the hostname column from the lookup file "hostname.csv", and if it matches, write the ip and mac values to the hostname.csv file. The base search doesn't have location; I would like to keep the location column as is. The result should look like this new hostname.csv:

ip          mac          hostname    location
x.x.x.x     00:new:mac   abc_01      NYC_orig
x.x.y.new   00:00:00     def_02      DC_orig
x.x.x.y     00:00:11     ghi_03      Chicago_orig
new.ip      new:mac      jkl_04      LA_orig

Thank you.
"I was thinking about somehow matching these two events on s and qid so I can insert a field with the hdr_mid value into the first event. That would give all events hdr_mid and qid, so grouping by hdr_mid and qid in the final stats statement would let me pull the list of recipients." This is why you need to describe the full use case, including all relevant data, not just the events you are trying to extract something from. @gcusello's idea is still applicable here; you just substitute stats with eventstats:

<your-search>
| eventstats values(hdr_mid) AS hdr_mid values(eval(if(cmd="send",rcpts,""))) AS rcpts BY s qid
| stats whatever by hdr_mid qid
Let me try to understand the requirement. You will only compare hostname, then add ip and mac from the index, but only if the hostname already exists in hostname.csv. Is this correct? lookup is your friend:

index=* host=* ip=* mac=*
| fields host ip mac
| dedup host ip mac
| lookup hostname.csv hostname AS host output hostname AS match
| table host ip mac
| rename host AS hostname
| outputlookup hostname.csv

(The rename keeps the output column named hostname, matching the lookup file's schema.)
Quick summary: After @sainag_splunk identified this as a 9.2 bug, I updated the affected instance to 9.3 and the problem is gone.
The disabled setting in SHC only impacts captain election and member roster management.
I have a hostname.csv file that contains these attributes. hostname.csv:

ip          mac         hostname
x.x.x.x                 abc_01
            00:00:00    def_02
x.x.x.y     00:00:11    ghi_03
                        jkl_04

I would like to search in Splunk with index=* host=* ip=* mac=*, compare my host to the hostname column from the lookup file "hostname.csv", and if it matches, write the ip and mac values to the hostname.csv file. The result should look like this new hostname.csv:

ip          mac          hostname
x.x.x.x     00:new:mac   abc_01
x.x.y.new   00:00:00     def_02
x.x.x.y     00:00:11     ghi_03
new.ip      new:mac      jkl_04

Thank you for your help!!!
What if you have multiple values that you want to rename to the same field?

| rename "Message.TaskInfo.CarHop Backup.LastResult"="-2147020576" AS Result
| rename "Message.TaskInfo.CarHop Backup.LastResult"=1 AS Result
| rename "Message.TaskInfo.CarHop Backup.LastResult"=0 AS Result
| rename "Message.TaskInfo.AI Restart Weekly.LastResult"=267011 AS Result
| rename "Message.TaskInfo.CarHop Backup.LastResult"=267009 AS Result
| rename "Message.TaskInfo.CarHop Backup.LastResult"=2 AS Result

This is not working for me.
Hello all, I found a weird problem/defect. Not sure whether you are seeing the same.

Issue: I am unable to bind an IP and get the error "Oops, the server encountered an unexpected condition". I followed https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/BindSplunktoanIP. It comes down to editing the correct web.conf file, but the document does not say which web.conf file to edit, and there are eight web.conf files in total. If you try copying web.conf from the default folder into the local folder and editing mgmtHostPort (with the correct IP, port, etc.), it still does not work.

Resolution: if you edit mgmtHostPort in the web.conf file at the location below, it works perfectly, i.e. you can launch Splunk at IP address:8000:

C:\Program Files\Splunk\var\run\splunk\merged\web.conf

However: if you restart Splunk Enterprise via the web console (the Restart button within the application), the setting is lost and you need to apply it again. If you restart via Microsoft services > splunkd.service, there is no problem.

Environment used: Splunk Enterprise 9.3.1 (on-prem), OS: Windows Server 2022. Also tried on Splunk Enterprise 9.2.1 (on-prem); it has the same problem.
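For anyone reproducing this, the change is a single setting in web.conf; a minimal sketch with placeholder values (the IP and ports below are assumptions, not from the post). Note that the documented place for local overrides is $SPLUNK_HOME\etc\system\local\web.conf — the merged file under var\run is generated at startup, which may explain why edits there do not survive certain restarts:

```ini
# web.conf - sketch; IP address and ports are placeholders
[settings]
mgmtHostPort = 10.0.0.5:8089
```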
Hi gcusello, thanks for the quick reply. Unfortunately this approach in its entirety will not work, as there are more events than these two in a "send an email" group of events. All events except the first of the two I posted have both hdr_mid and qid fields, so I group them by these fields in stats. Also, only the two events I posted have the rprt set of fields with s in them. I was thinking about somehow matching these two events on s and qid so I can insert a field with the hdr_mid value into the first event. That would give all events hdr_mid and qid, so grouping by hdr_mid and qid in the final stats statement would let me pull the list of recipients. BTW, the values statement below is exactly what I was looking for to pull the rcpts field from the proper event:

values(eval(if(cmd="send",rcpts,""))) AS rcpts
OK, so push authentication.conf from the deployer, and on each search head create an authentication.conf in system/local with the bind password in the stanza in clear text, something like this:

[ldap_1]
bindDNpassword = abc123

Then do a rolling restart on the whole SH cluster, and the password should be encrypted.
You didn't understand the question. I have a non-clustered indexing infrastructure (8 standalone indexers).

@sainag_splunk wrote: "I don't think that required."

I decide what's required, and I want the SHC to replicate distsearch.conf. My question was a different one: leaving replicate_search_peers aside, what changes in an SHC when setting disabled = true or false? By default it is true.
Hello @splunksuperman, oneshot is typically used in development/testing environments when you:

- want to upload a file once to Splunk
- don't want to set up an ongoing input
- need to directly copy data into Splunk Enterprise

You can try a for loop and wrap the command in it.

Refer: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/MonitorfilesanddirectoriesusingtheCLI#Monitor_Splunk_Enterprise_files_and_directories_with_the_CLI

For your data onboarding, consider using a monitored input on a forwarder, or use the Splunk Add-on for Microsoft Windows if dealing with Windows files.

If this helps, please upvote.
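The for-loop idea above can be sketched like this. The directory, index, and sourcetype are placeholders, not from the thread, and the real splunk add oneshot call is left commented out so the loop can be dry-run first:

```shell
# Dry-run sketch: print the oneshot command that would be issued for
# every CSV in a directory. Index/sourcetype are placeholders.
oneshot_all() {
  dir="$1"
  for f in "$dir"/*.csv; do
    [ -e "$f" ] || continue   # skip when the glob matches nothing
    echo "would run: splunk add oneshot \"$f\" -index main -sourcetype csv"
    # Uncomment once the list looks right:
    # "$SPLUNK_HOME/bin/splunk" add oneshot "$f" -index main -sourcetype csv
  done
}
```

Once the printed list looks correct, uncomment the real call and run it again; each file is then indexed exactly once.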
Right now I have an issue with duplicate notables. I want a notable to re-generate only if new events have added to its risk score, not if no new events have happened and its risk score has remained the same. I have tried adjusting our base correlation search's throttling to throttle by risk object over every 7 days, because our correlation search goes back over the last 7 days' worth of alerts to determine whether or not to trigger a notable. Which brings me to this question: do the underlying alerts (i.e., the alerts that contribute to generating a risk score, which ultimately determines whether a risk object is generated) also need to be throttled for the past 7 days? Right now the throttling settings for those alerts throttle by username over the past 1 day.
@verbal_666 May I know the reason you are trying to update this? Basically, replicate_search_peers is for when you want to add a non-clustered indexer via the GUI and replicate it to your other SHC peers; I don't think that's required here.

Refer: https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/Connectclustersearchheadstosearchpeers#Replicate_the_search_peers_across_the_cluster

I have only seen replicate_search_peers used where a search head cluster is in use with no index clustering: if you don't want the search peer connections replicated, it is set to false; if you want them replicated, it is set to true.

replicate_search_peers = true
- only works if disabled = false
- automatically copies search peer connections to all SHC members
- most useful when connecting an SHC to non-clustered indexers

If this helps, please upvote.
Hi @SplunkUser001,
you can do this using stats, something like this:

<your-search>
| stats values(hdr_mid) AS hdr_mid values(eval(if(cmd="send",rcpts,""))) AS rcpts BY s qid

Ciao.
Giuseppe
Hi @d123r432k,
good for you, see you next time!
Let me know if I can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated
Hello, I have these two events that are part of a transaction. They have the same s and qid. I need to match the s and qid of these two and insert a field equal to hdr_mid from the second event into the first event. Is this possible? In the final stats I group events by hdr_mid and qid, so I need the hdr_mid value present in the first event if I want to extract all recipient email addresses. To do so I need to pull rcpts from the first event and not the second. How would I do that?

Oct 24 13:46:56 hostname.company.com 2024-10-24T18:46:56.426217+00:00 hostname filter_instance1[31332]: rprt s=42cu1tr3wx m=1 x=42cu1tr3wx-1 cmd=send profile=mail qid=49O9Yi2a005119 rcpts=1@company.com,2@company.com,3@company.com...52@company.com

Oct 24 13:46:56 hostname.company.com 2024-10-24T18:46:56.426568+00:00 hostname filter_instance1[31332]: rprt s=42cu1tr3wx m=1 x=42cu1tr3wx-1 mod=mail cmd=msg module= rule= action=continue attachments=0 rcpts=52 routes=allow_relay,default_inbound,internalnet size=4416 guid=Rze4pxSO_BZ4kUYS0OtXqLZjW3uHSx8d hdr_mid=<103502694.595.1729795616099.JavaMail.psoft@xyz123> qid=49O9Yi2a005119 hops-ip=x.x.x.x subject="Message subject" duration=0.271 elapsed=0.325