All Posts

Really? Only Server 2022? I may downgrade if that's the case. I have a support ticket open with Splunk, and so far no luck and no mention of a version conflict. I may downgrade and test.
Yeah, we have 14 servers acting as our WEF environment, all with the same UF version and config pushed out from central management/deployment. Six are Server 2016, four are Server 2019, and another four are Server 2022. Only the Server 2022 boxes have this issue. I've experimented with various .conf settings trying to band-aid it, and only "current_only = 1" seems to make a difference. I've packaged up Procmon .pml and .dmp files for support to look at; I don't know if a fix is possible. I'll post back if I hear anything.
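For reference, the workaround described above would look something like this in the UF's inputs.conf (the ForwardedEvents stanza name assumes the standard Windows event log input for the Forwarded Events channel):

```ini
[WinEventLog://ForwardedEvents]
disabled = 0
# Workaround: only collect events that arrive after the forwarder starts,
# rather than replaying the whole channel from the beginning.
current_only = 1
```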
I have set up your suggested settings (this is actually what I was using first); however, it only captures 1 event instead of the 3 that are available. I uploaded some more screenshots below of what I am experiencing and hope this makes more sense now: the trigger config, a sample email alert that gets generated, and the search query showing three events.
I don't have a Windows server to test this on, so I don't know if this works, but the file used for customizing client behaviour is deploymentclient.conf. Normally you deploy it inside a dedicated app and install that onto the target server, for example /my_app/local/deploymentclient.conf or $SPLUNK_HOME/etc/system/local/deploymentclient.conf. The config would be: [deployment-client] clientName = $FQDN (So you may be able to use a PowerShell script after installing the UF to inject the FQDN into the clientName setting in that file. Test on one server manually first and see if it works.) To get the FQDN via PowerShell: $FQDN = "$env:COMPUTERNAME.$env:USERDNSDOMAIN" Write-Output $FQDN
Your custom modular input script class should inherit from splunklib.modularinput.Script. But you cannot access the service object in __init__, only from stream_events() onwards, as that is when your code receives the payload from Splunk used to construct the Service object. You can use the service object at the beginning of your stream_events(inputs, ew): stanza = self.service.confs["app"] https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtocreatemodpy/ https://docs.splunk.com/DocumentationStatic/PythonSDK/2.0.1/modularinput.html#splunklib.modularinput.Script
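A minimal sketch of that pattern, assuming the Splunk Python SDK (splunklib) is on the path; "myinput" and the "app" conf name are placeholders, not from the original post:

```python
import sys
from splunklib.modularinput import Script, Scheme

class MyInput(Script):
    def __init__(self):
        super().__init__()
        # self.service is NOT usable here: Splunk has not yet sent the
        # input payload, so no Service object has been constructed.

    def get_scheme(self):
        return Scheme("myinput")

    def stream_events(self, inputs, ew):
        # By the time stream_events() runs, the payload has arrived and
        # self.service is a fully constructed Service object.
        stanza = self.service.confs["app"]

if __name__ == "__main__":
    sys.exit(MyInput().run(sys.argv))
```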
I find it strange that the other Event Logs forward just fine without crashing; it's only when forwarding the "Forwarded Events" log. We can't be the only people using Windows Event Collectors to collect events and then forward them to a Splunk server.
If I understand correctly, you want an alert for every unique Ticket (id) value, but each unique Ticket (id) value should be throttled for 24 hours after it triggers an alert. You can accomplish this by setting the trigger conditions:
Trigger alert when: Number of Results is greater than 0
Trigger: For each result
Throttle: (checked)
Suppress results containing field value: Ticket
Suppress triggering for: 24 hours
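For anyone who prefers to configure this directly, the same UI options correspond, to the best of my knowledge, to these attributes in savedsearches.conf ("My alert" is a placeholder stanza name):

```ini
[My alert]
counttype = number of events
relation = greater than
quantity = 0
alert.digest_mode = 0           # "For each result": one alert per result row
alert.suppress = 1              # enable throttling
alert.suppress.fields = Ticket  # throttle per unique Ticket value
alert.suppress.period = 24h     # suppress repeats for 24 hours
```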
I had already restarted the deployment server, but the hostname remains as the short name in the GUI.
The LINE_BREAKER attribute requires at least one capture group, and the text that matches the first capture group is discarded and replaced with an event break. Knowing this, and that an empty capture group is allowed, try these settings:
[<sourcetype_name>]
CHARSET = AUTO
LINE_BREAKER = "platform":"ArcodaSAT"\}()
SHOULD_LINEMERGE = false
Check that you have created a local splunk account and group, set the correct folder permissions, and followed the steps here: https://docs.splunk.com/Documentation/Forwarder/9.0.2/Forwarder/Installanixuniversalforwarder
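As a rough sketch of those steps on a typical Linux install (the path and account name are common defaults, not from your system; FreeBSD would use pw useradd instead):

```shell
# Run as root; assumes the forwarder was unpacked to /opt/splunkforwarder.
useradd -m splunk                            # service account (creates a matching group on most distros)
chown -R splunk:splunk /opt/splunkforwarder  # let the splunk user own the install
sudo -u splunk /opt/splunkforwarder/bin/splunk start --accept-license
```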
Ubuntu on Windows is still Windows.  I had the same problem.  You have to use a real Linux box.
The props.conf file should be on the machine that is parsing your logs. If your log path is UF->HF->Cloud, then the HF is likely the one doing the parsing, and it should have the props.conf file, not the UF. Also, keep in mind that the first capture group of LINE_BREAKER is discarded; it is intended to capture the filler characters that occur between distinct events. If you would like to keep "platform":"ArcodaSAT"} as part of the first event, then it should not be inside the capture group. Try this: LINE_BREAKER = \"platform\"\:\"ArcodaSAT\"\}() As for SHOULD_LINEMERGE, this is better set to false unless you want events to be recombined into bigger events. If your LINE_BREAKER above separates distinct events correctly, then SHOULD_LINEMERGE should be false: SHOULD_LINEMERGE = false
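To make the capture-group behaviour concrete, here is a small Python sketch (not Splunk code, just an illustration of the regex mechanics): it mimics how the text matched by the first capture group is dropped while everything outside it stays with the events. With the empty group placed after the closing brace, nothing is discarded and the brace stays with the first event.

```python
import re

# Two sample events concatenated, as Splunk would see them in the stream.
stream = '{"event":1,"platform":"ArcodaSAT"}{"event":2,"platform":"ArcodaSAT"}'

# LINE_BREAKER with an empty capture group AFTER the closing brace.
line_breaker = r'\"platform\"\:\"ArcodaSAT\"\}()'

def simulate_line_break(stream, line_breaker):
    """Crude sketch of Splunk's LINE_BREAKER: break the stream at each
    match, discarding only the text captured by the first group."""
    events, pos = [], 0
    for m in re.finditer(line_breaker, stream):
        # The event ends where group 1 starts; group 1's text is dropped.
        events.append(stream[pos:m.start(1)])
        pos = m.end(1)
    events.append(stream[pos:])
    return [e for e in events if e]
```

Running simulate_line_break(stream, line_breaker) yields the two complete JSON events, each still ending in "platform":"ArcodaSAT"}, because the empty group discards nothing.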
A temporary workaround that worked for us was setting current_only to 1 and restarting the forwarder. splunk-wineventlog.exe still crashes and restarts, but at least it reads some events and sends them before it does.
Has anyone seen this issue while installing the Splunk forwarder on FreeBSD 13.3, or any idea why we are getting it? I am trying to install Splunk forwarder 9.0.2.
This appears to be your first time running this version of Splunk.
Splunk software must create an administrator account during startup. Otherwise, you cannot log in. Create credentials for the administrator account. Characters do not appear on the screen when you type in credentials.
Please enter an administrator username: admin
ERROR: pid 18277 terminated with signal 11 (core dumped)
Hello,
Background: I am generating alerts around our Office 365 environment using the Content Pack for Microsoft 365. I have limited search query experience but am willing to put in the time to learn more as I go. About the Content Pack for Microsoft 365 - Splunk Documentation
Trying to accomplish: runs every 10 minutes > triggers a single alert if "id"/"Ticket" is unique for every result > throttles for 24 hours. This is just an example of my search query:
(index=Office365) sourcetype="o365:service:healthIssue" service="Exchange Online" classification=incident OR advisory status=serviceDegradation OR investigating
| eventstats max(_time) as maxtime by id
| where _time = maxtime
| mvexpand posts{}.description.content
| mvexpand posts{}.createdDateTime
| rename posts{}.description.content AS content posts{}.createdDateTime AS postUpdateTime
| stats latest(content) AS Content latest(status) AS Status earliest(_time) AS _time latest(postUpdateTime) AS postUpdateTime by service, classification, id, isResolved
| fields _time service classification id Content postUpdateTime Status isResolved
| sort + isResolved -postUpdateTime
| rename isResolved AS Resolved? service AS Workload id AS Ticket classification AS Classification postUpdateTime AS "Last Update"
Would I need a custom trigger, and what result would be required for suppressing?
What is happening: there could technically be 3 events based on the search query, but the alert only sends 1 email to me (with only 1 event) instead of 3 individual alert emails with 3 separate events. I am trying to prevent the same alert being generated for the same "Ticket/id", so that if a new event happens it will trigger the alert. Should I be using a custom trigger, and if so, what result would I suppress to prevent multiple alerts for the same "ticket/id"? Any help would be appreciated!
Thank you!
Hi @Kjell.Lönnqvist, I know you asked your question a while ago, but Steve has offered some insight if this is still a question you have. Feel free to jump in and continue the conversation. 
Hi @Abdulrahman.Kazamel, Thank you for asking your question on the Community. I don't fully understand your question. Can you please try explaining again? In case you didn't know, you can always check out AppD Docs for helpful information. 
I need to skip the first 10 lines of the key field "_raw".
I would try rebooting the deployment server; it could be a cache issue.
It would be better to give us some more context; it helps when trying to help and answer your question. I guess you are trying to remove / filter out some data? This is just a guess at what you may be wanting to do. Here is an example using makeresults that filters out ticket_id=5678 (apply the same principles to your code):
| makeresults
| eval _raw="ticket_id, priority,status
123,P1,Closed
5678,,
8765,P2,Closed"
| multikv forceheader=1
| search ticket_id!=5678
| table ticket_id, priority, status