All Posts


It appears that the JAMF Classic API uses the paths:

https://server.name.here:8443/JSSResource
https://server.name.here:8443/api

while the JAMF Pro API uses the path:

https://server.name.here:8443/uapi

There are mentions of the uapi endpoint in the "JAMF Pro Add-on for Splunk" app, in /JAMF-Pro-addon-for-splunk/bin/uapiModels/devices.py and in jamfpro.py in the same directory, so the app likely uses the Pro API as well as the Classic API. However, the code in jamfpro.py suggests that it uses basic authentication with a username and password to obtain a bearer token, with no mention of Access Token, Client ID, or Client Secret. So the likely answer to your question about authentication is that the app only supports basic authentication.

class JamfPro:
    ...

class JamfUAPIAuthToken(object):
    ...
    def get_token(self):
        url = self.server_url + 'api/v1/auth/token'
        logging.info("JSSAuthToken requesting new token")
        userpass = self._auth[0] + ':' + self._auth[1]
        encoded_u = base64.b64encode(userpass.encode()).decode()
        headers = {"Authorization": "Basic %s" % encoded_u}
        for key in self.extraHeaders:
            headers[key] = self.extraHeaders[key]
        response = self.helper.send_http_request(url="https://" + url,
                                                 method="POST",
                                                 headers=headers,
                                                 use_proxy=self.useProxy)
        if response.status_code != 200:
            raise Exception
        self._set_token(response.json()['token'], self.unix_timestamp() + 60)
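If you want to check for yourself that only a username and password are involved (no Client ID or Client Secret), here is a minimal standalone sketch of the same basic-auth token request, assuming the standard Jamf Pro token endpoint and the Python requests library; the server name and credentials are placeholders:

import requests

server = "https://server.name.here:8443"          # placeholder server
username, password = "api_user", "api_password"   # placeholder credentials

# POST to the token endpoint using only HTTP Basic authentication,
# mirroring what the add-on's get_token() does internally.
response = requests.post(server + "/api/v1/auth/token",
                         auth=(username, password))
response.raise_for_status()
token = response.json()["token"]                  # bearer token for later calls
print(token)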
Yep, Server 2022 was the only outlier for us. The issue was consistent across a few 9.x UF versions as well: 9.0.1, 9.1.0, and 9.2.1 all had the same behavior on Server 2022 but not on older Windows Server platforms. Honestly, if my infrastructure wasn't already up and running on 2022 I'd downgrade to 2019.
how to read nested dictionary where the keys are dotted-strings

I have the following posted dictionary:

process_dict = {
     task.com.company.job1 {
           duration = value1
     }
     task.com.company.job2 {
           duration = value2
     }
     task3.com.company.job1 = {
           duration = value3
     }
}

I did the following:

| spath path=result.process_dict output=process_data
| eval d_json = json(process_data), d_keys = json_keys(d_json), d_mv = json_array_to_mv(d_keys)
...
| eval duration_type = ".duration"
...
| eval duration = json_extract(process_data, d_mv.'duration_type')

I am not able to capture the value from the "duration" key. HOWEVER, if the key was just a single word (without '.'), this would work, i.e. task_com instead of task.com.company.job2.

TIA
That was my mistake, I was testing out other possibilities on the "result" thinking that would help. I changed it to just "Ticket" and I received three separate email alerts, thank you!
We have Splunk installed and collection was happening normally, but for a few days now collection has stopped. The forwarder is running normally. How do I solve the problem with automatic report collection and sending?
Is there a reason you are using "$result.title$" instead of "Ticket" in the "Suppress results containing field value" field?
You can list the users using the REST API, then sort them by the number of days since last successful login:

| rest /services/authentication/users splunk_server=local
| table title email type last_successful_login
| eval days_since_last_login = round((now() - last_successful_login)/86400,1)
| sort - days_since_last_login

Then for each one, you can use the various REST APIs for knowledge objects, listed at https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTaccess, e.g. for field extractions:

| rest /services/data/props/extractions splunk_server=local
| search eai:acl.owner = "<nameofinactiveuser>"
| table attribute author eai:acl.app eai:acl.owner stanza title updated type value

Unfortunately there is no endpoint for "all knowledge objects", so you'll have to make a REST call for each separate type.

EDIT: nvm, richgalloway found one
Really? Only 2022? I may downgrade if that's the case. I have a support ticket open with Splunk and so far no luck or mention of a version conflict. I may downgrade and test.
Yeah, we have 14 servers acting as our WEF environment, all with the same UF version and conf pushed out from central management/deployment. There are 6 that are Server 2016, 4 are Server 2019, and another 4 are Server 2022. Only the Server 2022 boxes have this issue. I've messed around with various .conf settings trying to band-aid it and only "current_only = 1" seems to make a difference. I've packed up procmon .pml and .dmp files for support to look at... dunno if there's a fix possible... I'll post back if I hear anything.
I have set up based on your suggested settings (this is actually what I was using first), however it only captures 1 event instead of the 3 that are available. I uploaded some more screenshots below of what I am experiencing and hope this makes more sense now.

[screenshot: trigger config]
[screenshot: sample email alert that gets generated]
[screenshot: search query shows three events]
I don't have a Windows server to test this out, so I don't know if this works, but the file used for customizing the client behaviour is deploymentclient.conf, and normally you deploy this in a dedicated app and install it onto the target server.

Example:

/my_app/local/deploymentclient.conf
OR
$SPLUNK_HOME/etc/system/local/deploymentclient.conf

Config:

[deployment-client]
clientName = $FQDN

(So you may be able to use a PowerShell script after installing the UF to inject the FQDN into the clientName setting of that file; test on one server manually first and see if it works.)

PowerShell to get the FQDN:

$FQDN = "$env:COMPUTERNAME.$env:USERDNSDOMAIN"
Write-Output $FQDN
Your custom modular input script class should inherit from splunklib.modularinput.Script. But you cannot access the service object in __init__, only from stream_events() onwards, as that is when your code receives the payload from Splunk used to construct the Service object. You can use the service object at the beginning of your stream_events(inputs, ew):

stanza = self.service.confs["app"]

https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtocreatemodpy/
https://docs.splunk.com/DocumentationStatic/PythonSDK/2.0.1/modularinput.html#splunklib.modularinput.Script
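A minimal sketch of that pattern, assuming the splunklib Python SDK is bundled with the app; the input name, scheme, and the "app" conf file are just placeholders:

import sys
from splunklib.modularinput import Script, Scheme, Event

class MyInput(Script):  # hypothetical modular input
    def get_scheme(self):
        scheme = Scheme("my_input")
        scheme.description = "Example modular input"
        return scheme

    def stream_events(self, inputs, ew):
        # self.service only becomes usable here, after Splunk has passed the
        # input payload to the script; it is not available in __init__.
        conf = self.service.confs["app"]   # read a .conf file via the service
        for stanza in conf:
            ew.log("INFO", "found stanza: %s" % stanza.name)
        # emit a simple event for each configured input stanza
        for input_name in inputs.inputs:
            ew.write_event(Event(stanza=input_name, data="hello from %s" % input_name))

if __name__ == "__main__":
    sys.exit(MyInput().run(sys.argv))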
I find it strange that the other Event Logs forward just fine and don't crash. It's just when forwarding the "forwarded events". We can't be the only people using Windows Event Collectors to collect events and then forward them to a Splunk server.
If I understand correctly, you want an alert for every unique Ticket (id) value, but every unique Ticket (id) value will be throttled for 24 hours after it triggers an alert. You can accomplish this by setting the trigger conditions:

Trigger alert when: Number of Results is greater than 0
Trigger: For each result
Throttle: (checked)
Suppress results containing field value: Ticket
Suppress triggering for: 24 hours
I had already restarted the deployment server, but the hostname remains the same as the short name in the GUI.
The LINE_BREAKER attribute requires at least one capture group, and the text that matches the first capture group will be discarded and replaced with an event break. Knowing this, and that an empty capture group is allowed, try these settings:

[<sourcetype_name>]
CHARSET=AUTO
LINE_BREAKER = "platform":"ArcodaSAT"\}()
SHOULD_LINEMERGE = false
Check that you have created a local splunk account and group, set the correct permissions on the folder, and ensure you follow the steps here: https://docs.splunk.com/Documentation/Forwarder/9.0.2/Forwarder/Installanixuniversalforwarder
Ubuntu on Windows is still Windows.  I had the same problem.  You have to use a real Linux box.
The props.conf file should be on the machine that is parsing your logs. If your log path is UF -> HF -> Cloud, then likely the HF machine is the one doing the parsing, and it should have the props.conf file, not the UF.

Also, keep in mind that the first capture group of LINE_BREAKER is discarded. It is intended to capture the filler characters that occur between distinct events. If you would like to keep "platform":"ArcodaSAT"} as part of the first event, then it should not be in a capture group. Try this:

LINE_BREAKER = \"platform\"\:\"ArcodaSAT\"\}()

For SHOULD_LINEMERGE, this would be better set to false unless you would like events to be recombined to make bigger events. If your LINE_BREAKER above works well to separate distinct events, then SHOULD_LINEMERGE should be false:

SHOULD_LINEMERGE = false
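A rough Python illustration of the "first capture group is discarded" rule; this only mimics the idea (it is not Splunk's actual line-breaking engine) and the sample data is made up:

import re

# Two JSON events glued together, as the raw stream might look before breaking.
raw = '{"id":1,"platform":"ArcodaSAT"}{"id":2,"platform":"ArcodaSAT"}'

# Splunk inserts the event break at the first capture group of LINE_BREAKER
# and discards whatever that group matched.  With an empty group () placed
# after the closing brace, nothing is discarded and the brace stays on the
# first event.
breaker = re.compile(r'"platform":"ArcodaSAT"\}()')

events, start = [], 0
for m in breaker.finditer(raw):
    events.append(raw[start:m.start(1)])  # keep everything up to the capture group
    start = m.end(1)                      # resume right after the (empty) group
if raw[start:]:
    events.append(raw[start:])

print(events)
# ['{"id":1,"platform":"ArcodaSAT"}', '{"id":2,"platform":"ArcodaSAT"}']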
A temporary workaround that worked for us was setting current_only to 1 and restarting the forwarder. Splunk-wineventlog.exe still crashes and restarts, but it does at least read some events and send them before it crashes.