All Posts

Hello all, I found a weird problem/defect. Not sure whether you are seeing the same.

Issue: I am unable to bind an IP and get the error 'Oops, the server encountered an unexpected condition'. I followed https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/BindSplunktoanIP. It's related to editing the correct 'web.conf' file. The document basically does not tell you which 'web.conf' file to edit; there are eight 'web.conf' files in total. If you try copying 'web.conf' from the 'default' folder into the 'local' folder and editing 'mgmtHostPort' (with the correct IP, port, etc.), it still does not work.

Resolution: if you edit the 'web.conf' file (for mgmtHostPort) in the location below, it works perfectly, i.e. you are able to launch Splunk at 'IP address:8000':

C:/Program Files/Splunk/var/run/splunk/merged/web.conf

However: if you restart Splunk Enterprise via the web console, i.e. the 'Restart' button within the application, the setting is lost and you need to do it again. If you restart via Microsoft Services > splunkd.service, there is no problem.

Environment used: Splunk Enterprise 9.3.1 (on-prem), OS: Windows Server 2022. Also tried on Splunk Enterprise 9.2.1 (on-prem): same problem.
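For reference, the setting from that doc is normally placed in the system-level local web.conf rather than anywhere under var/run (files under var/run/splunk/merged are generated at runtime, which may be why edits there do not survive a web-console restart). A hedged sketch, with placeholder IP and port values:

```ini
# C:\Program Files\Splunk\etc\system\local\web.conf  (example values only)
[settings]
mgmtHostPort = 10.0.0.5:8089
```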
Hi gcusello, thanks for the quick reply. Unfortunately this approach in its entirety will not work, as there are more events than these two in a "send an email" group of events. All events except the first of the two I posted have both hdr_mid and qid fields, so I group them by these fields in stats. Also, only the two events I posted have the rprt set of fields with s in them. I was thinking about somehow matching these two events on s and qid so I can insert a field with the hdr_mid value into the first event. That would give me all events with hdr_mid and qid in them, so grouping by hdr_mid and qid in the final stats statement would let me pull the list of recipients. BTW, the values statement below is exactly what I was looking for to pull the rcpts field from the proper event:

values(eval(if(cmd="send",rcpts,""))) AS rcpts
OK, so push authentication.conf from the deployer, and on each search head create an authentication.conf in system/local with the bind password in the stanza in clear text, something like this:

[ldap_1]
bindDNpassword = abc123

Then do a rolling restart on the whole SH cluster, and the password should be encrypted.
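As a sketch of what that looks like on disk (the stanza name and password come from the post above; the encrypted value shown is purely illustrative): before the restart the file holds the clear-text password, and after splunkd restarts it rewrites the value in encrypted form.

```ini
# Before restart: $SPLUNK_HOME/etc/system/local/authentication.conf
[ldap_1]
bindDNpassword = abc123

# After the rolling restart, splunkd rewrites the value encrypted, e.g.:
# bindDNpassword = $7$Jk3f...   (illustrative; the actual ciphertext will differ)
```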
I don't understand the question. I have a non-clustered indexing infrastructure (8 standalone indexers).

@sainag_splunk wrote: I don't think that's required.

I decide what's required, and I want the SHC to replicate distsearch.conf. My question was different: leaving replicate_search_peers aside, what changes in a SHC when setting "disabled = true or false"? The default is true.
Hello @splunksuperman, oneshot is typically used in development/testing environments when you want to do the following (you can wrap the command in a for loop):

- Upload a file once to Splunk
- Don't want to set up an ongoing input
- Need to directly copy data into Splunk Enterprise

Refer: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/MonitorfilesanddirectoriesusingtheCLI#Monitor_Splunk_Enterprise_files_and_directories_with_the_CLI

For your data onboarding, consider using a monitored input on a forwarder, or use the Splunk Add-on for Microsoft Windows if dealing with Windows files. If this helps, please upvote.
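The "wrap the command in a for loop" suggestion can be sketched like this. The directory, index name, and file names are assumptions for illustration; the real CLI lives at $SPLUNK_HOME/bin/splunk, so here it is stubbed with a shell function so the sketch runs anywhere (delete the stub to call the real binary).

```shell
# Stub for the real splunk CLI -- remove this function to run the real thing.
splunk() { echo "would run: splunk $*"; }

# Hypothetical staging directory with the daily exports to load.
mkdir -p /tmp/oneshot_demo
touch /tmp/oneshot_demo/day1.csv /tmp/oneshot_demo/day2.csv

# Oneshot each CSV into the (assumed) target index.
loaded=0
for f in /tmp/oneshot_demo/*.csv; do
  splunk add oneshot "$f" -index indexdemo
  loaded=$((loaded + 1))
done
echo "submitted $loaded file(s)"
```

This is only a convenience wrapper; for recurring loads a monitored input on a forwarder is still the more robust choice.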
Right now I have an issue with duplicate notables. I want to make it so a notable will only re-generate if there have been new events that have added to its risk score, not if no new events have happened and its risk score has remained the same. I have tried adjusting our base correlation search's throttling to throttle by risk object over every 7 days, because our correlation search goes back over the last 7 days' worth of alerts to determine whether or not to trigger a notable. Which brings me to this question: do the underlying alerts (i.e., the alerts that contribute to generating a risk score, which ultimately determines if a risk object is generated or not) also need to be throttled for the past 7 days? Right now the throttling settings for those alerts are set to throttle by username over the past 1 day.
@verbal_666, may I know the reason you are trying to update this? Basically, replicate_search_peers is for when you add a non-clustered indexer via the GUI and want to replicate that to your other SHC peers; I don't think that's required here.

Refer: https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/Connectclustersearchheadstosearchpeers#Replicate_the_search_peers_across_the_cluster

I have only seen replicate_search_peers used where a search head cluster is in use with no index clustering: if you want to search the non-clustered indexers without replicating the peer list, it's set to false; if you want the peer list replicated, it's set to true.

replicate_search_peers = true:
- Only works if disabled = false
- Automatically copies search peer connections to all SHC members
- Most useful when connecting an SHC to non-clustered indexers

If this helps, please upvote.
Hi @SplunkUser001, you can do this using stats, something like this:

<your-search>
| stats values(hdr_mid) AS hdr_mid values(eval(if(cmd="send",rcpts,""))) AS rcpts BY s qid

Ciao. Giuseppe
Hi @d123r432k, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hello, I have these two events that are part of a transaction. They have the same s and qid. I need to match the s and qid of these two and insert a field equal to hdr_mid from the second event into the first event. Is this possible? In the final stats I group events by hdr_mid and qid, so I need the hdr_mid value present in the first event if I want to extract all recipient email addresses. To do so I need to pull rcpts from the first event and not the second. How would I do that?

Oct 24 13:46:56 hostname.company.com 2024-10-24T18:46:56.426217+00:00 hostname filter_instance1[31332]: rprt s=42cu1tr3wx m=1 x=42cu1tr3wx-1 cmd=send profile=mail qid=49O9Yi2a005119 rcpts=1@company.com,2@company.com,3@company.com...52@company.com

Oct 24 13:46:56 hostname.company.com 2024-10-24T18:46:56.426568+00:00 hostname filter_instance1[31332]: rprt s=42cu1tr3wx m=1 x=42cu1tr3wx-1 mod=mail cmd=msg module= rule= action=continue attachments=0 rcpts=52 routes=allow_relay,default_inbound,internalnet size=4416 guid=Rze4pxSO_BZ4kUYS0OtXqLZjW3uHSx8d hdr_mid=<103502694.595.1729795616099.JavaMail.psoft@xyz123> qid=49O9Yi2a005119 hops-ip=x.x.x.x subject="Message subject" duration=0.271 elapsed=0.325
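One way to sketch this (untested; the field names s, qid, hdr_mid, cmd, and rcpts are taken from the events above) is to let eventstats copy hdr_mid onto every event sharing the same s and qid, so the cmd=send event picks it up from the cmd=msg event before the final stats:

```
<your-search>
| eventstats values(hdr_mid) AS hdr_mid BY s qid
| stats values(eval(if(cmd="send",rcpts,null()))) AS rcpts BY hdr_mid qid
```

eventstats, unlike stats, keeps the original events and only appends the aggregated field, which is what makes the cross-event copy possible here.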
I solved this by making a new search head cluster with the same machines and the same names. When I ran the command, everything went fine:

splunk edit cluster-config -mode searchhead -manager_uri https://10.152.31.202:8089 -secret newsecret123 -auth login:password

The initial problem was that I had installed the deployer on the manager node. When I was about to install the Enterprise Security instance, it needed to be installed on the deployer for some reason. Now everything works as intended, I hope.
@gcusello I have one indexer, and inside it I have created one index. I can't fetch data from that index on the search head, but I can fetch it on the indexer. Thanks
Hi @RAVISHANKAR , can you access other indexes or not? Ciao. Giuseppe
It works! Thank you very much MuS!
@gcusello - do we need to check anything else further?
Can you post what you ended up with and accept an answer that helped as the solution? Even if it's your own (I believe you can do that). Glad to hear you got where you needed to go!
@gcusello - yes, this is done, and it is showing as status up and replication was successful. Thanks
@gcusello - could you please explain in a bit more detail? "Configured Distributed Search in Settings, configuring the Indexers for searching" - on the indexer or on the search head? Thanks
Hi guys, I have a set of data in the following format (screenshot not included). This is a manually exported list, and my requirements are as follows:

- Objective: I need to identify hosts that haven't connected to the server for a long time and track the daily changes in these numbers.
- Method: Since I need daily statistics, I must perform the import daily. However, without any configuration changes, Splunk defaults to using "Last Communication" as "_time", which is not what I want. I need "_time" to reflect the date of the import. That way, I can track changes in the count of "Last " records within each day's imported data.

I can't use folder or file monitoring for this because it only adds new data, so my only options are to use oneshot or to perform the import via the Web interface. Is my approach correct? If not, what other methods could be used?

I could use splunk oneshot to upload the file to the Splunk indexer, but I couldn't adjust the date to the import day or a specific day. As an example, I used the command:

splunk add oneshot D:\upload.csv -index indexdemo

I want the job to run automatically, so I don't want to change any content in the file. How can I do this?
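One hedged option for the "_time should be the import date" requirement: give the file a dedicated sourcetype whose props.conf sets DATETIME_CONFIG = CURRENT, which tells Splunk to skip timestamp extraction and stamp events with the time they are indexed. The sourcetype name below is an assumption for illustration:

```ini
# props.conf on the instance that parses the data
[daily_export_csv]
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = csv
```

The oneshot command can then reference it, e.g. splunk add oneshot D:\upload.csv -index indexdemo -sourcetype daily_export_csv, so each daily import lands with that day's _time while the "Last Communication" column stays available as an ordinary field.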
Hi @RAVISHANKAR, did you configure Distributed Search in Settings, configuring the indexers for searching? Ciao. Giuseppe