All Posts

The Add-on for Cloudflare data app is best installed on a heavy forwarder, as it is managed through the web interface. On a heavy forwarder, install the app via Apps -> "Manage Apps" -> "Install app from file", then upload the file. You can then navigate to the app using the Apps dropdown in the upper left and selecting the app. In the upper left, go to Configuration, then Add-on Settings, and enter your X-Auth email and key for Cloudflare. Next, go to the Inputs menu in the upper left and press "Create New Input" (in the upper right). There you can create inputs for the various data types, specifying the index and the interval for collecting the logs from the Cloudflare API. Once this is done, and provided your heavy forwarder can connect to Cloudflare, it should start indexing logs in sourcetypes beginning with cloudflare:*, e.g. index=<yourindex> sourcetype=cloudflare:*
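As a quick sanity check once an input is enabled (a minimal sketch; substitute the index you configured for <yourindex>):

index=<yourindex> sourcetype=cloudflare:* earliest=-1h
| stats count by sourcetype

If nothing shows up after a couple of collection intervals, the forwarder's index=_internal logs are a good place to look for connection errors from the input.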
Hello, I've got a cluster with 2 peers, 1 search head and 1 CM, all of them on a single network. Due to a network change, the servers are going to get an additional card with a new network address. I'd like to know if it's possible to swap the IP address used for replication between peer members and for SH communication, while keeping the old one for forwarder communication.
Initially: peer 1 => 10.254.x.1, peer 2 => 10.254.x.2
After changes:
Peer 1 => forwarder communication 10.254.x.1, replication/SH comm => 10.254.y.1
Peer 2 => forwarder communication 10.254.x.2, replication/SH comm => 10.254.y.2
I've tried using the register_replication_address and register_search_address parameters in server.conf with the new address 10.254.y., but the peers and the CM complain about a duplicate guid/member. Do you have any advice on how to do this, if it's possible?
Thanks
Frédéric
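For context, what I tried on peer 1 was roughly the following (peer 2 with 10.254.y.2); a sketch of the settings mentioned above, not a confirmed working configuration:

[clustering]
register_replication_address = 10.254.y.1
register_search_address = 10.254.y.1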
It appears that the JAMF classic API uses the paths:
https://server.name.here:8443/JSSResource
https://server.name.here:8443/api
While the JAMF Pro API uses the path:
https://server.name.here:8443/uapi
There are mentions of the uapi endpoint in files in the "JAMF Pro Add on for Splunk" app, at /JAMF-Pro-addon-for-splunk/bin/uapiModels/devices.py and in jamfpro.py in the same directory, so the app likely uses the Pro API as well as the classic API. However, the code in jamfpro.py suggests that it uses basic authentication with username and password to obtain a bearer token, with no mention of Access Token, Client ID, or Client Secret. So the likely answer to your question about authentication is that the app only supports basic authentication.

class JamfPro:
    class JamfUAPIAuthToken(object):
        ...
        def get_token(self):
            url = self.server_url + 'api/v1/auth/token'
            logging.info("JSSAuthToken requesting new token")
            userpass = self._auth[0] + ':' + self._auth[1]
            encoded_u = base64.b64encode(userpass.encode()).decode()
            headers = {"Authorization": "Basic %s" % encoded_u}
            for key in self.extraHeaders:
                headers[key] = self.extraHeaders[key]
            response = self.helper.send_http_request(url="https://" + url,
                                                     method="POST",
                                                     headers=headers,
                                                     use_proxy=self.useProxy)
            if response.status_code != 200:
                raise Exception
            self._set_token(response.json()['token'], self.unix_timestamp() + 60)
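For reference, the token request in that code is equivalent to something like this (the server name is the placeholder used above; the credentials are whatever the add-on is configured with):

curl -s -u "username:password" -X POST "https://server.name.here:8443/api/v1/auth/token"

The JSON response contains the 'token' value that the add-on then caches via _set_token().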
Yep, Server 2022 was the only outlier for us. The issue was consistent across a few 9.x UF versions as well: 9.0.1, 9.1.0 and 9.2.1 all had the same behavior on Server 2022 but not on older Windows Server platforms. Honestly, if my infrastructure wasn't already up and running on 2022 I'd downgrade to 2019.
How to read a nested dictionary where the keys are dotted strings

I have the following posted dictionary:

process_dict = {
     task.com.company.job1 {
           duration = value1
     }
     task.com.company.job2 {
           duration = value2
     }
     task3.com.company.job1 = {
           duration = value3
     }
}

I did the following:

| spath path=result.process_dict output=process_data
| eval d_json = json(process_data), d_keys = json_keys(d_json), d_mv = json_array_to_mv(d_keys)
...
| eval duration_type = ".duration"
...
| eval duration = json_extract(process_data, d_mv.'duration_type')

I am not able to capture the value from the "duration" key. HOWEVER, if the key were just a single word (without '.'), this would work, i.e. task_com instead of task.com.company.job2

TIA
That was my mistake; I was testing out other possibilities with the "result" value, thinking that would help. I changed it to just "Ticket" and I received three separate email alerts, thank you!
We have Splunk installed and collection was happening normally, but for a few days now collection has stopped. The forwarder is running normally. How do I solve the problem with automatic report collection and sending?
Is there a reason you are using "$result.title$" instead of "Ticket" in the "Suppress results containing field value" field?
You can list the users using the REST API, then sort them by the number of days since their last successful login:

| rest /services/authentication/users splunk_server=local
| table title email type last_successful_login
| eval days_since_last_login = round((now() - last_successful_login)/86400,1)
| sort - days_since_last_login

Then for each one, you can use the various REST APIs for knowledge objects, listed at https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTaccess e.g. for field extractions:

| rest /services/data/props/extractions splunk_server=local
| search eai:acl.owner = "<nameofinactiveuser>"
| table attribute author eai:acl.app eai:acl.owner stanza title updated type value

Unfortunately there is no endpoint for "all knowledge objects", so you'll have to make a REST call for each separate type.
EDIT: nvm, richgalloway found one
Really? Only 2022? I may downgrade if that's the case. I have a support ticket open with Splunk and so far no luck or mention of a version conflict. I may downgrade and test.
Yeah, we have 14 servers acting as our WEF environment, all with the same UF version and conf pushed out from central management/deployment. There are 6 that are Server 2016, 4 are Server 2019, and another 4 are Server 2022. Only the Server 2022 boxes have this issue. I've messed around with various .conf settings trying to bandaid it, and only "current_only = 1" seems to make a difference. I've packed up procmon PML and .dmp files for support to look at... dunno if there's a fix possible. I'll post back if I hear anything.
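For anyone comparing notes, that workaround lives in the forwarder's inputs.conf for the ForwardedEvents channel; a minimal sketch (the index name here is illustrative, not what we actually use):

[WinEventLog://ForwardedEvents]
disabled = 0
current_only = 1
index = wineventlog

The trade-off is that current_only = 1 only collects events that arrive while the forwarder is running, so older events already sitting in the channel are skipped.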
I set it up based on your suggested settings (this is actually what I was using first), however it only captures 1 event instead of the 3 that are available. I uploaded some more screenshots below on what I am experiencing and hope this makes more sense now.
[Screenshots: trigger config; sample email alert that gets generated; search query showing three events]
I don't have a Windows server to test this on, so I don't know if it works, but the file used for customizing the client behaviour is deploymentclient.conf, and normally you deploy it in a dedicated app and install it onto the target server. Example: /my_app/local/deploymentclient.conf OR $SPLUNK_HOME/etc/system/local/deploymentclient.conf

Config:
[deployment-client]
clientName = $FQDN

(So you may be able to use a PowerShell script after installing the UF to inject the clientName value into that file; test on one server manually first and see if it works, as in the sketch below.)

PowerShell to get the FQDN:
$FQDN = "$env:COMPUTERNAME.$env:USERDNSDOMAIN"
Write-Output $FQDN
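A rough sketch of that idea (paths assume the default UF install location, and this overwrites any existing deploymentclient.conf in system/local, so adjust to your environment):

$FQDN = "$env:COMPUTERNAME.$env:USERDNSDOMAIN"
$confPath = "C:\Program Files\SplunkUniversalForwarder\etc\system\local\deploymentclient.conf"
@"
[deployment-client]
clientName = $FQDN
"@ | Set-Content -Path $confPath -Encoding ASCII
# restart the UF so the new clientName is picked up
& "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart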
Your custom modular input script class should inherit from splunklib.modularinput.Script. But you cannot access the service object in __init__, only from stream_events() onwards, as that is when your code receives the payload from Splunk used to construct the Service object. You can use the service object at the beginning of your stream_events(inputs, ew):

stanza = self.service.confs["app"]

https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtocreatemodpy/
https://docs.splunk.com/DocumentationStatic/PythonSDK/2.0.1/modularinput.html#splunklib.modularinput.Script
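A minimal sketch of the overall shape (the input name "my_input" and the conf name "app" are just placeholders):

import sys
from splunklib.modularinput import Script, Scheme, Event

class MyInput(Script):
    def get_scheme(self):
        scheme = Scheme("my_input")
        scheme.description = "Example modular input"
        return scheme

    def stream_events(self, inputs, ew):
        # self.service is only populated here, not in __init__
        app_conf = self.service.confs["app"]
        for stanza in app_conf:
            ew.log("INFO", "found stanza: %s" % stanza.name)
        # emit one event per configured input stanza
        for input_name in inputs.inputs:
            ew.write_event(Event(stanza=input_name, data="hello from %s" % input_name))

if __name__ == "__main__":
    sys.exit(MyInput().run(sys.argv))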
I find it strange that the other Event Logs forward just fine and don't crash; it's just when forwarding the "forwarded events". We can't be the only people using Windows Event Collectors to collect events and then forward them to a Splunk server.
If I understand correctly, you want an alert for every unique Ticket (id) value, but every unique Ticket (id) value will be throttled for 24 hours after it triggers an alert. You can accomplish this by setting the trigger conditions:
Trigger alert when: Number of Results is greater than 0
Trigger: For each result
Throttle: (checked)
Suppress results containing field value: Ticket
Suppress triggering for: 24 hours
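If you prefer to see it in configuration form, those UI choices map roughly to these savedsearches.conf attributes (a sketch, assuming the search and alert action are already defined):

counttype = number of events
relation = greater than
quantity = 0
alert.digest_mode = 0
alert.suppress = 1
alert.suppress.fields = Ticket
alert.suppress.period = 24h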
I had already restarted the deployment server, but the hostname still shows as the short name in the GUI.
The LINE_BREAKER attribute requires at least one capture group, and the text that matches the first capture group is discarded and replaced with an event break. Knowing this, and that an empty capture group is allowed, try these settings:

[<sourcetype_name>]
CHARSET = AUTO
LINE_BREAKER = "platform":"ArcodaSAT"\}()
SHOULD_LINEMERGE = false
Check that you have created a local splunk account and group, and set the correct permissions on the install folder; ensure you follow the steps here:
https://docs.splunk.com/Documentation/Forwarder/9.0.2/Forwarder/Installanixuniversalforwarder
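On a typical Linux host that boils down to something like this (a sketch, assuming the default /opt/splunkforwarder path and a user/group named splunk):

sudo useradd -r -m splunk
sudo chown -R splunk:splunk /opt/splunkforwarder
sudo -u splunk /opt/splunkforwarder/bin/splunk start --accept-license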
Ubuntu on Windows is still Windows.  I had the same problem.  You have to use a real Linux box.