All Posts

I am receiving the logs and require a query to monitor the top 10 highest usage of CPU, memory, processor, and disk.
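In case it helps, a minimal sketch of one way to approach this, assuming the hosts report process data via the Splunk Add-on for Unix and Linux (sourcetype=top, which provides pctCPU, pctMEM, and COMMAND fields); the index name is a placeholder and your environment may use a different add-on:

index=<your_os_index> sourcetype=top
| stats avg(pctCPU) as avg_cpu avg(pctMEM) as avg_mem by host, COMMAND
| sort - avg_cpu
| head 10

Sorting on avg_mem instead ranks by memory; disk metrics would come from a different input (for example the df or iostat scripted inputs in the same add-on).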
Hello, I am referring to the following documentation: Route and filter data - Splunk Documentation. I would like to discard some syslog data coming from the firewall, for instance, before it goes through indexing. In props.conf under system I have this:

[source::udp:514]
TRANSFORMS-null = setnull

[source::tcp:514]
TRANSFORMS-null = setnull

And in transforms.conf, to filter out traffic going to Google DNS:

[setnull]
REGEX = dstip=8\.8\.8\.8
DEST_KEY = queue
FORMAT = nullQueue

I have tried renaming the transforms and duplicating setnull with different names, however the event filtering only works on the UDP source and does not work on the TCP source. Did I miss anything? It feels really strange that the event discarding does not work on the TCP syslog source. Any ideas, or alternatives, for discarding events on an all-in-one (AIO) Splunk setup? Thanks in advance.
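A hedged sketch of an alternative worth trying: key the props stanza on the sourcetype (or host) assigned to the TCP input rather than on source::tcp:514, in case the TCP input writes a different source value than expected. The sourcetype name below is a placeholder, not taken from the question:

props.conf:
[your_firewall_sourcetype]
TRANSFORMS-null = setnull

transforms.conf:
[setnull]
REGEX = dstip=8\.8\.8\.8
DEST_KEY = queue
FORMAT = nullQueue

You can confirm the actual source and sourcetype the TCP events arrive with by searching the index and checking the source and sourcetype fields, then matching the props stanza to those exact values.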
json_extract is documented as not handling periods in the names and suggests using json_extract_exact, but it does not appear to work with an array of keys:

| makeresults
| fields - _time
| eval splunk_path="{\"system.splunk.path\":\"/opt/splunk/\",\"system.splunk.path2\":\"/opt/splunk/\"}"
| eval paths=mvappend("system.splunk.path","system.splunk.path2")
| eval extracted_path=json_extract_exact(splunk_path, "system.splunk.path")
| eval extracted_path2=json_extract_exact(splunk_path, "system.splunk.path2")
| eval extracted_paths=json_extract_exact(splunk_path, paths)
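A possible workaround sketch, assuming json_extract_exact only accepts one key per call: iterate over the multivalue list with mvmap (available in Splunk 8.0+) so each key is passed individually:

| makeresults
| fields - _time
| eval splunk_path="{\"system.splunk.path\":\"/opt/splunk/\",\"system.splunk.path2\":\"/opt/splunk/\"}"
| eval paths=mvappend("system.splunk.path","system.splunk.path2")
| eval extracted_paths=mvmap(paths, json_extract_exact(splunk_path, paths))

This returns one extracted value per key as a multivalue field rather than a single combined result.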
Here is a very simple example of "joining" two different datasets together based on their common ID. Almost all of the example is just setting up some example data. What you really need are the last 3 lines. If you paste this into a search window it will randomly return some results where PRODUCT contains MISMATCH - if you remove the last line of the example you will see all results of the made-up data.

| makeresults
| fields - _time
``` Make some data for sourcetype=autos ```
| eval sourcetype="autos"
| eval MAKE=split("Audi,Porsche,Mercedes",",")
| mvexpand MAKE
| eval MODEL=case(MAKE="Audi", split("AU-123,AU-988", ","), MAKE="Porsche", split("PO-123,PO-988", ","), MAKE="Mercedes", split("MX-123,MX-988", ","))
| mvexpand MODEL
| eval VIN=case(MAKE="Audi", split("AU-VIN:12345678,AU-VIN:9876543", ","), MAKE="Porsche", split("PO-VIN:12345678,PO-VIN:9876543", ","), MAKE="Mercedes", split("MX-VIN:12345678,MX-VIN:9876543", ","))
| mvexpand VIN
| eval VIN=MODEL.":".VIN
``` Make some identical data for sourcetype=cars ```
| append [
  | makeresults
  | fields - _time
  | eval sourcetype="cars"
  | eval MANUFACTURER=split("Audi,Porsche,Mercedes",",")
  | mvexpand MANUFACTURER
  | eval PRODUCT=case(MANUFACTURER="Audi", split("AU-123,AU-988", ","), MANUFACTURER="Porsche", split("PO-123,PO-988", ","), MANUFACTURER="Mercedes", split("MX-123,MX-988", ","))
  | mvexpand PRODUCT
  | eval SN=case(MANUFACTURER="Audi", split("AU-VIN:12345678,AU-VIN:9876543", ","), MANUFACTURER="Porsche", split("PO-VIN:12345678,PO-VIN:9876543", ","), MANUFACTURER="Mercedes", split("MX-VIN:12345678,MX-VIN:9876543", ","))
  | mvexpand SN
  | eval SN=PRODUCT.":".SN
  | eval PRODUCT=PRODUCT.if(random() % 100 < 10, "-MISMATCH", "") ]
``` Take the common field ```
| eval COMMON_ID=if(sourcetype="autos", VIN, SN)
| stats values(*) as * by COMMON_ID
| where MAKE!=MANUFACTURER OR MODEL!=PRODUCT

Don't ever consider join as the first option - it's not the Splunk way of doing things and it has numerous limitations. Splunk uses stats ... BY COMMON_FIELD. Hope this helps.
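Applied to the question's own data, the same pattern would look roughly like this (a sketch only; the index, sourcetype, and field names are taken from the question as written and assumed to exist as stated):

index=vehicles (sourcetype=autos OR sourcetype=cars)
| eval COMMON_ID=if(sourcetype="autos", VIN, SN)
| stats values(*) as * by COMMON_ID
| where MAKE!=MANUFACTURER OR MODEL!=PRODUCT
| table VIN MAKE MODEL SN MANUFACTURER PRODUCT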
All, I am currently working with the Splunk Add-on for Microsoft Office 365 4.5.1 on Linux. All inputs are enabled and collecting. I am trying to see who approved a Privileged Identity Management event. I can't find the relevant events in Splunk, but I do find them in the Entra ID and Microsoft Purview dashboards.
1. Is there a TA I am missing?
2. If this TA is indeed not correctly pulling this data in, do I open a support case? Or is there another custom way to hit that endpoint and snag that data?
Thanks, -Daniel
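One way to check whether any PIM-related audit records are already being collected by the add-on (a sketch; the index is a placeholder, and the sourcetype and Workload values assume the add-on's management activity input is enabled):

index=<your_o365_index> sourcetype="o365:management:activity" Workload=AzureActiveDirectory Operation="*PIM*"
| stats count by Operation

If this returns nothing while Purview clearly shows the events, the data is probably not exposed through the inputs this add-on collects, which would point toward the support-case route.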
@kuul13 This is a straightforward use of the chart command; see this run-anywhere example:

| makeresults count=20
| fields - _time
| eval ClientName=mvindex(split("ABC",""), random() % 3)
| mvexpand ClientName
| eval ClientName="Client ".ClientName
| eval apiName="retrievePayments".mvindex(split("ABCD",""), random() % 4)
| chart count over ClientName by apiName

This sets up some example data and then uses the chart command to do the tabling you need.
When dealing with the time picker, the addinfo command is your friend, as it gives you info_min_time and info_max_time, which are the actual earliest and latest times of the search range, so you can use these to compute things. As for avg/min, while individual counts per minute can be wildly different, if the measurement period is 2 hours and the total count is 10,000, then the avg/hour has to be 5,000, even though the 1st hour may have been 4,000 and the second hour 6,000. Also, if the avg/min over a period of 60 minutes is 100 (i.e. a total of 6,000), then the avg/hour must be 6,000, i.e. the total.
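A small run-anywhere sketch of the addinfo approach described above, computing an hourly average over whatever range the time picker selects (the index is just an example):

index=_internal
| stats count
| addinfo
| eval hours=(info_max_time - info_min_time)/3600
| eval avg_per_hour=round(count/hours, 2)
| table count hours avg_per_hour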
Hi. I've been a very basic user of Splunk for a while, but now have a need to perform more advanced searches. I have two different sourcetypes within the same index. Examples of the fields are below.

index=vehicles
sourcetype=autos: VIN, MAKE, MODEL
sourcetype=cars: SN, MANUFACTURER, PRODUCT

I'd like to search and table VIN, MAKE, MODEL, MANUFACTURER and PRODUCT where:
VIN = SN
MAKE <> MANUFACTURER OR MODEL <> PRODUCT

Basically, where VIN and SN match, if one or both of the other fields don't match, show me. I'm not sure if a join (VIN and SN) statement is the best approach in this case. I've researched and found questions and answers related to searching and comparing multiple sourcetypes, but I've been unable to find examples that include conditions. Any suggestions you can provide would be greatly appreciated. Thank you!
Once you have tried my suggestion, please tell me what happened and what still is not working.
Hi, I am new to Splunk. I am trying to figure out how to extract the count of errors per API call made for each client. I have the following query that I run:

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* (returncode=Error OR returncode=Communication_Error)
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client
| table ClientName, apiName

This query parses the message to extract the apiNames that start with retrievePayments, and shows this kind of result:

ClientName   apiName
Client A     retrievePaymentsA
Client B     retrievePaymentsA
Client C     retrievePaymentsB
Client A     retrievePaymentsB

I want to see an output where my wildcard apiName values are transposed and show the error count for every client:

Client     retrievePaymentsA   retrievePaymentsB   retrievePaymentsC   retrievePaymentsD
Client A   2                   5                   0                   1
Client B   2                   2                   1                   6
Client C   8                   3                   0                   0
Client D   1                   0                   4                   3

Any help would be appreciated.
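For reference, a sketch of how the chart-based suggestion earlier in this feed would slot into this exact query (field and lookup names are taken from the question as written and assumed to be correct):

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* (returncode=Error OR returncode=Communication_Error)
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client
| chart count over ClientName by apiName

Replacing the final table with chart count over ClientName by apiName produces the transposed error-count layout shown above.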
Splunk version is 9.1.0.2. We are trying to resolve searches that are flagged as orphaned in the report "Orphaned Scheduled Searches, Reports, and Alerts". The list does not match what we see under "Reassign Knowledge Objects", since we resolved all of those. I am unable to find the searches (I believe they are private), but I want to know why I, as an admin, am unable to manage these searches; if anything, just to disable them. Many of the users have since left our company and I need to manage their items. Please help!
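A sketch of one way to list scheduled searches whose owner no longer exists as a user, using the REST endpoints (run from the search head; this assumes the remaining users are visible via the authentication/users endpoint):

| rest /servicesNS/-/-/saved/searches splunk_server=local count=0
| search is_scheduled=1
| rename "eai:acl.owner" as owner, "eai:acl.app" as app, "eai:acl.sharing" as sharing
| fields title app owner sharing disabled
| join type=left owner [| rest /services/authentication/users splunk_server=local | fields title | rename title as owner | eval user_exists=1]
| where isnull(user_exists)

Once the title, app, and owner are known, the objects can usually be disabled or reassigned via the saved/searches REST endpoint even when they do not appear in the Reassign Knowledge Objects page.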
The Add-on for Cloudflare data app is best installed on a heavy forwarder, as it is managed using the web interface. On the heavy forwarder, install the app using Apps -> "Manage Apps" -> "Install app from file", then upload the file. You can then navigate to the app using the Apps dropdown in the upper left and selecting the app. On the upper left, go to Configuration, then Add-on Settings, and enter your X-Auth email and key for Cloudflare. Then go to the Inputs menu in the upper left and press "Create New Input" (in the upper right). There you can create inputs for the various data types, specifying the index and the interval for collecting logs from the Cloudflare API. Once this is done, and if your heavy forwarder can connect to Cloudflare, it should start indexing logs in sourcetypes beginning with cloudflare:*, e.g. index=<yourindex> sourcetype=cloudflare:*
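If nothing shows up after the inputs are enabled, two quick sanity-check searches may help (a sketch; the index is a placeholder and the error search simply looks for splunkd errors mentioning cloudflare rather than a specific log source of this add-on):

index=<yourindex> sourcetype=cloudflare:*
| stats count by sourcetype, source

index=_internal sourcetype=splunkd log_level=ERROR cloudflare
| stats count by component

The first confirms which inputs are delivering data; the second surfaces errors logged by splunkd, including modular input failures, that reference cloudflare.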
Hello, I've got a cluster with 2 peers, 1 search head and 1 CM, all of them on a single network. Due to a network change, the servers are going to have an additional card with a new network address. I'd like to know if it's possible to swap the IP address used for replication between peer members and for SH communication, while keeping the old one for forwarder communication.

Initially: peer 1 => 10.254.x.1, peer 2 => 10.254.x.2
After the change: peer 1 => forwarder communication 10.254.x.1, replication/SH communication 10.254.y.1; peer 2 => forwarder communication 10.254.x.2, replication/SH communication 10.254.y.2

I've tried using the register_replication_address and register_search_address parameters in server.conf with the new 10.254.y addresses, but the peers and the CM complain about a duplicate GUID/member. Do you have any advice on how to do this, if it's possible? Thanks, Frédéric
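For reference, a sketch of where those parameters live on each peer; the values are just the ones described above, and this placement alone does not explain the duplicate GUID/member complaint:

# server.conf on peer 1
[clustering]
register_replication_address = 10.254.y.1
register_search_address = 10.254.y.1

If indexer discovery is in use, there is also a register_forwarder_address setting for the address advertised to forwarders. The duplicate GUID message is worth ruling out separately, as it commonly comes from instances that share the same GUID in $SPLUNK_HOME/etc/instance.cfg (for example after cloning a VM), independently of which addresses they register.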
It appears that the JAMF Classic API uses the paths:

https://server.name.here:8443/JSSResource
https://server.name.here:8443/api

while the JAMF Pro API uses the path:

https://server.name.here:8443/uapi

There are mentions of the uapi endpoint in the "JAMF Pro Add on for Splunk" app at /JAMF-Pro-addon-for-splunk/bin/uapiModels/devices.py and in jamfpro.py in the same directory, so the app likely uses the Pro API as well as the Classic API. However, the code in jamfpro.py suggests that it uses basic authentication with username and password to obtain a bearer token, with no mention of Access Token, Client ID, or Client Secret. Thus the likely answer to your question about authentication is that the app only supports basic authentication.

class JamfPro:
    class JamfUAPIAuthToken(object):
        ....
        def get_token(self):
            url = self.server_url + 'api/v1/auth/token'
            logging.info("JSSAuthToken requesting new token")
            userpass = self._auth[0] + ':' + self._auth[1]
            encoded_u = base64.b64encode(userpass.encode()).decode()
            headers = {"Authorization": "Basic %s" % encoded_u}
            for key in self.extraHeaders:
                headers[key] = self.extraHeaders[key]
            response = self.helper.send_http_request(url="https://" + url, method="POST", headers=headers, use_proxy=self.useProxy)
            if response.status_code != 200:
                raise Exception
            self._set_token(response.json()['token'], self.unix_timestamp() + 60)
Yep, Server 2022 was the only outlier for us. The issue was consistent across a few 9.x UF versions as well: 9.0.1, 9.1.0, and 9.2.1 all had the same behavior on Server 2022 but not on older Windows Server platforms. Honestly, if my infrastructure wasn't already up and running on 2022, I'd downgrade to 2019.
How to read a nested dictionary where the keys are dotted strings. I have the following dictionary:

process_dict = {
    task.com.company.job1 {
        duration = value1
    }
    task.com.company.job2 {
        duration = value2
    }
    task3.com.company.job1 = {
        duration = value3
    }
}

I did the following:

| spath path=result.process_dict output=process_data
| eval d_json = json(process_data), d_keys = json_keys(d_json), d_mv = json_array_to_mv(d_keys)
...
| eval duration_type = ".duration"
...
| eval duration = json_extract(process_data, d_mv.'duration_type')

I am not able to capture the value from the "duration" key. HOWEVER, if the key was just a single word (without '.'), this would work, i.e.

task_com

instead of

task.com.company.job2

TIA
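A hedged sketch of one way around the dotted keys, assuming the JSON eval functions (json_keys, json_array_to_mv, json_extract_exact) are available in your Splunk version: pull each dotted top-level key as a literal string with json_extract_exact, then use json_extract for the inner, dot-free "duration" key, iterating over the key list with mvmap. The sample JSON below stands in for the real process_dict:

| makeresults
| eval process_data="{\"task.com.company.job1\":{\"duration\":\"value1\"},\"task.com.company.job2\":{\"duration\":\"value2\"}}"
| eval d_keys=json_array_to_mv(json_keys(process_data))
| eval durations=mvmap(d_keys, json_extract(json_extract_exact(process_data, d_keys), "duration"))

Note that d_mv.'duration_type' in the original concatenates two field values into a single path string, which json_extract then treats as one dotted path; building the lookup per key inside mvmap avoids that issue.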
That was my mistake; I was testing out other possibilities on the "result" thinking that would help. I changed it to just "Ticket" and I received three separate email alerts, thank you!
We have Splunk installed and collection was happening normally, but for a few days now the collection has stopped. The forwarder is running normally. How do I solve the problem with automatic report collection and sending?
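A couple of starting-point checks (a sketch; replace the index and host placeholders with your own values):

| metadata type=hosts index=<your_index>
| eval lastSeen=strftime(recentTime, "%F %T")
| sort - recentTime

index=_internal host=<your_forwarder_host> source=*splunkd.log* (ERROR OR WARN)
| stats count by component

The first shows when each host last sent data to the index; the second surfaces warnings and errors from the forwarder's own splunkd.log, assuming its internal logs still reach the indexer.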
Is there a reason you are using "$result.title$" instead of "Ticket" in the "Suppress results containing field value" field?
You can list the users using the REST API, then sort them by the number of days since their last successful login:

| rest /services/authentication/users splunk_server=local
| table title email type last_successful_login
| eval days_since_last_login = round((now() - last_successful_login)/86400,1)
| sort - days_since_last_login

Then for each one, you can use the various REST APIs for knowledge objects, listed at https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTaccess, e.g. for field extractions:

| rest /services/data/props/extractions splunk_server=local
| search eai:acl.owner = "<nameofinactiveuser>"
| table attribute author eai:acl.app eai:acl.owner stanza title updated type value

Unfortunately there is no single endpoint for "all knowledge objects", so you'll have to make a REST call for each separate type. EDIT: nvm, richgalloway found one