All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have written a Splunk query and used the streamstats command to make my output look like this:

Query used:

... | streamstats current=f last(History) as Status by "Ticket Id" | ...

Current output:

Ticket ID        Priority   Status
1234 4321 5678   P1         Closed, In Progress
8765             P2         Closed

However, I want to remove the record for 4321 and look at all the closed tickets for priorities P1 and P2. Since 4321 is also of P1 priority, the entire P1 record is removed when I use this query:

... | streamstats current=f last(History) as Status by "Ticket Id" | where NOT Status IN ("In Progress") | ...

Output:

Ticket ID   Priority   Status
8765        P2         Closed

How do I remove only 4321, since its status is "In Progress"? Please help.

Expected output:

Ticket ID   Priority   Status
1234 5678   P1         Closed
8765        P2         Closed
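A sketch of one possible approach, assuming the P1 row is a multivalue result of a later grouping step (field names are taken from the question): filter the per-ticket rows before grouping, so only ticket 4321 is dropped rather than the whole P1 group.

```spl
... 
| streamstats current=f last(History) as Status by "Ticket Id"
| where Status!="In Progress"
| stats values("Ticket Id") as "Ticket ID" values(Status) as Status by Priority
```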
Hello, I need to upgrade the Splunk indexers to version 9.2, but we have a few instances running Splunk forwarders at versions 6.6.3, 7.0.4, and 7.2.1. I have referred to the link below, but there is no mention of compatibility with versions prior to 7.3: Compatibility between forwarders and Splunk Enterprise indexers - Splunk Documentation
Hello, I am referring to the following documentation: Route and filter data - Splunk Documentation. I would like to discard some syslog data coming from the firewall before it goes through indexing. For instance, in props.conf under system I have this:

[source::udp:514]
TRANSFORMS-null = setnull

[source::tcp:514]
TRANSFORMS-null = setnull

And in transforms.conf, to filter out traffic going to Google DNS:

[setnull]
REGEX = dstip=8\.8\.8\.8
DEST_KEY = queue
FORMAT = nullQueue

I have tried renaming the transforms and duplicating setnull under different names; however, the event filtering only works on the UDP source and does not work on the TCP source. Did I miss anything? It feels really odd that event discarding does not work on the TCP syslog source. Any ideas, or alternatives for discarding events on an all-in-one Splunk setup? Thanks in advance.
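A hedged sketch of one thing worth checking (the wildcard stanza is an assumption): TCP inputs can record a more specific source string than plain tcp:514, so the stanza may never match. A trailing wildcard covers that case, and a quick search confirms what the actual source values are.

```spl
index=* (source=udp:514 OR source=tcp*)
| stats count by source, sourcetype
```

If the TCP source turns out to be more specific, a props.conf stanza such as [source::tcp:514...] (with the trailing "...") would match it.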
All, I am currently working with Splunk Add-on for Microsoft Office 365 4.5.1 on Linux. All inputs are enabled and collecting. I am trying to see who approved a Privileged Identity Management event. I can't find the relevant events in Splunk, but I do find them in the Entra ID and Microsoft Purview dashboards. 1. Is there a TA I am missing? 2. If this TA is indeed not bringing this data in correctly, do I open a support case? Or is there another custom way to hit that endpoint and grab that data? Thanks, -Daniel
Hi. I've been a very basic user of Splunk for a while, but now have a need to perform more advanced searches. I have two different sourcetypes within the same index. Examples of the fields are below.

index=vehicles
  sourcetype=autos: VIN, MAKE, MODEL
  sourcetype=cars: SN, MANUFACTURER, PRODUCT

I'd like to search and table VIN, MAKE, MODEL, MANUFACTURER and PRODUCT where:

VIN = SN
MAKE <> MANUFACTURER OR MODEL <> PRODUCT

Basically, where VIN and SN match, if one or both of the other fields don't match, show me. I'm not sure if a join on VIN and SN is the best approach in this case. I've researched and found questions and answers related to searching and comparing multiple sourcetypes, but I've been unable to find examples that include conditions. Any suggestions you can provide would be greatly appreciated. Thank you!
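A sketch of one join-free approach (field names as in the question): coalesce VIN and SN into one key, collapse both sourcetypes onto it, then keep only the mismatches.

```spl
index=vehicles (sourcetype=autos OR sourcetype=cars)
| eval id=coalesce(VIN, SN)
| stats values(MAKE) as MAKE values(MODEL) as MODEL
        values(MANUFACTURER) as MANUFACTURER values(PRODUCT) as PRODUCT by id
| where MAKE!=MANUFACTURER OR MODEL!=PRODUCT
| table id MAKE MODEL MANUFACTURER PRODUCT
```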
Hi, I am new to Splunk. I am trying to figure out how to extract a count of errors per API call made for each client. I run the following query:

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* (returncode=Error OR returncode=Communication_Error)
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client
| table ClientName, apiName

This query parses message to extract the API names that start with retrievePayments, and shows this kind of result:

ClientName   apiName
Client A     retrievePaymentsA
Client B     retrievePaymentsA
Client C     retrievePaymentsB
Client A     retrievePaymentsB

I want an output where the wildcarded apiName values are transposed into columns, showing the error count for every client:

Client     retrievePaymentsA   retrievePaymentsB   retrievePaymentsC   retrievePaymentsD
Client A   2                   5                   0                   1
Client B   2                   2                   1                   6
Client C   8                   3                   0                   0
Client D   1                   0                   4                   3

Any help would be appreciated.
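One possible sketch, replacing the final table command with chart (everything upstream is taken from the question as-is):

```spl
index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* (returncode=Error OR returncode=Communication_Error)
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client
| chart count over ClientName by apiName
| fillnull value=0
```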
Splunk version is 9.1.0.2. We are trying to resolve searches that are orphaned, from the report "Orphaned Scheduled Searches, Reports, and Alerts". The list does not match what we see under "Reassign Knowledge Objects", since we have resolved all of those. I am unable to find the searches (I believe they are private), but want to know why I, as an admin, am unable to manage these searches, if only to disable them. Many of the users have since left our company and I need to manage their items. Please help!
Hello, I've got a cluster with 2 peers, 1 search head and 1 CM, all of them on a single network. Due to a network change, the servers are going to get an additional card with a new network address. I'd like to know if it's possible to swap the IP address used for replication between peer members and for SH communication, while keeping the old one for forwarder communication.

Initially: peer 1 => 10.254.x.1, peer 2 => 10.254.x.2
After changes:
Peer 1 => forwarder communication 10.254.x.1, replication/SH communication => 10.254.y.1
Peer 2 => forwarder communication 10.254.x.2, replication/SH communication => 10.254.y.2

I've tried using the register_replication_address and register_search_address parameters in server.conf with the new 10.254.y addresses, but the peers and the CM complain about a duplicate guid/member. Do you have any advice on how to do this, if it's possible? Thanks, Frédéric
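For reference, a minimal sketch of the server.conf settings involved (addresses are the example ones from the question; the stanza would be applied per peer, followed by a restart):

```ini
# server.conf on peer 1 -- illustrative
[clustering]
mode = peer
register_replication_address = 10.254.y.1
register_search_address = 10.254.y.1
```

These settings only change the addresses a peer advertises to the cluster manager, so a duplicate guid/member complaint may have a separate cause worth ruling out (e.g. a stale entry for the old address still registered on the CM).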
How to read a nested dictionary where the keys are dotted strings? I have the following posted dictionary:

process_dict = {
    task.com.company.job1 {
        duration = value1
    }
    task.com.company.job2 {
        duration = value2
    }
    task3.com.company.job1 {
        duration = value3
    }
}

I did the following:

| spath path=result.process_dict output=process_data
| eval d_json = json(process_data), d_keys = json_keys(d_json), d_mv = json_array_to_mv(d_keys)
...
| eval duration_type = ".duration"
...
| eval duration = json_extract(process_data, d_mv.'duration_type')

I am not able to capture the value from the "duration" key. However, if the key were just a single word (without '.'), this would work, i.e. task_com instead of task.com.company.job2. TIA
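A sketch of one workaround, assuming a reasonably recent Splunk release: json_extract interprets dots as path separators, while json_extract_exact treats its whole argument as a single literal key, so a dotted key can be resolved in two steps — pull the inner object out exactly, then extract duration from it.

```spl
| spath path=result.process_dict output=process_data
| eval d_keys = json_array_to_mv(json_keys(process_data))
| mvexpand d_keys
| eval duration = json_extract(json_extract_exact(process_data, d_keys), "duration")
```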
We have Splunk installed and collection was happening normally, but for a few days now the collection has stopped. The forwarder is running normally. How do I solve the problem with automatic report collection and sending?
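A first triage step worth sketching (the host value is a placeholder): check the forwarder's own internal logs for errors, which will show whether it is still connecting and sending.

```spl
index=_internal source=*splunkd.log* host=<your_forwarder_host> (ERROR OR WARN)
| stats count by component, log_level
```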
Has anyone seen this issue while installing the Splunk forwarder on FreeBSD 13.3, or any idea why we are getting this? I am trying to install Splunk forwarder 9.0.2.

This appears to be your first time running this version of Splunk.

Splunk software must create an administrator account during startup. Otherwise, you cannot log in. Create credentials for the administrator account. Characters do not appear on the screen when you type in credentials.

Please enter an administrator username: admin
ERROR: pid 18277 terminated with signal 11 (core dumped)
Hello, Background: I am generating alerts around our Office 365 environment using the Content Pack for Microsoft 365. I have limited search query experience but am willing to put in the time to learn more as I go. About the Content Pack for Microsoft 365 - Splunk Documentation

Trying to accomplish: runs every 10 minutes > triggers a single alert per unique "id"/"Ticket" in the results > throttled for 24 hours. This is just an example of my search query:

(index=Office365) sourcetype="o365:service:healthIssue" service="Exchange Online" classification=incident OR advisory status=serviceDegradation OR investigating
| eventstats max(_time) as maxtime by id
| where _time = maxtime
| mvexpand posts{}.description.content
| mvexpand posts{}.createdDateTime
| rename posts{}.description.content AS content posts{}.createdDateTime AS postUpdateTime
| stats latest(content) AS Content latest(status) AS Status earliest(_time) AS _time latest(postUpdateTime) AS postUpdateTime by service, classification, id, isResolved
| fields _time service classification id Content postUpdateTime Status isResolved
| sort + isResolved -postUpdateTime
| rename isResolved AS Resolved? service AS Workload id AS Ticket classification AS Classification postUpdateTime AS "Last Update"

What is happening: there could technically be 3 events matching the search query, but the alert only sends me 1 email (with only 1 event) instead of 3 individual alert emails with 3 separate events. I am trying to prevent the same alert being generated for the same "Ticket/id", so that a new event still triggers the alert. Should I be using a custom trigger? And if so, which result would I suppress on to prevent multiple alerts for the same "ticket/id"? Any help would be appreciated. Thank you!
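A sketch of the alert settings this usually maps to (savedsearches.conf names; "Ticket" is the field name produced by the rename in the query above): per-result triggering plus field-based throttling.

```ini
# savedsearches.conf -- illustrative
alert.digest_mode = 0            # "For each result": one alert per matching row
alert.suppress = 1               # enable throttling
alert.suppress.fields = Ticket   # throttle separately per ticket id
alert.suppress.period = 24h
```

The same settings appear in the alert UI as trigger mode "For each result" and "Throttle" with a "Suppress results containing field value" of Ticket.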
I am trying to change the host name from short name to FQDN in the deployment server GUI for Windows servers. I already have inputs.conf and server.conf set to $decideOnStartup and fullyqualifiedname respectively, in the local app folder. The hostname doesn't change in the GUI. The search logs show the FQDN for Windows servers after setting up inputs.conf and server.conf as above, but the hostname in the GUI remains the short name. How do I change it?
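For reference, a sketch of the two settings described (stanza placement is the usual one; a forwarder restart is assumed before any change shows up):

```ini
# inputs.conf (local app folder)
[default]
host = $decideOnStartup

# server.conf
[general]
hostnameOption = fullyqualifiedname
```

Note these control the host field on indexed events; the name shown in Forwarder Management comes from what the deployment client reports at phone-home, so checking clientName in deploymentclient.conf on the forwarder may also help.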
Hi everybody, I need to install a PHP agent for a dockerized application. The application is a CRM called SuiteCRM. I was not able to find documentation about this in the PHP section, and the Docker section only mentions Java, .NET and Node.js: https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/install-app-server-agents/agent-management/supported-automation-tools-to-deploy-agents/docker So I want to ask if somebody knows whether it is possible to install the PHP agent in this dockerized SuiteCRM application, and if so, what kind of considerations I should be aware of. At the moment there is no orchestrator to autoscale the container. Thanks in advance.
Hi Splunkers, I have an issue with a line-breaking use case. I know it should be very simple to fix, but I still have the problem, so there is something I'm not getting right. First, a little bit of info about the environment:

Log source: custom application
Input type: file monitor, via UF; the app was deployed with a DS
Final flow: log source with UF -> HF -> Splunk Cloud
Data are ingested? Yes.

Issue: once logs are collected, we get one unique big event, so we need to separate the data into different events. I thought: fine, I have built a lot of custom add-ons, I know how to do it. By the way, I did not perform the initial UF configuration, so I checked the deployed app and the logs. In summary:

A single event ends with "platform":"ArcodaSAT"}
The UF's deployed app is very simple: it has an app.conf, an inputs.conf and a props.conf.
inputs.conf works fine, since logs are ingested from the right source.
Below are the settings I found in props.conf:

[<sourcetype_name>]
CHARSET = AUTO
LINE_BREAKER = (\"platform\"\:\"ArcodaSAT\"\})
SHOULD_LINEMERGE = true

Observations: the regex is fine; I tested it on regex101 with a log sample and it matches. I tried the LINE_BREAKER both with round brackets (the documentation says the capture group marks where the new event starts) and without: same result. SHOULD_LINEMERGE has been set both to true and false: same result. Let me say again: I know this is some nonsense I'm missing, but I can't find it.
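A hedged sketch of two things worth checking (the sourcetype name is a placeholder): line breaking happens at the first full Splunk instance in the path, so in a UF -> HF -> Splunk Cloud flow these props belong on the HF, not on the UF; and whatever the first capture group matches is discarded at the break, so the group should contain only separator characters you are willing to lose, not the event terminator itself.

```ini
# props.conf on the HF (first parsing tier in a UF -> HF -> Cloud flow)
[<sourcetype_name>]
SHOULD_LINEMERGE = false
# break after the terminator; only the captured newline(s) are discarded
LINE_BREAKER = \"platform\":\"ArcodaSAT\"\}([\r\n]+)
```

If events sit back-to-back with no newline between them, an empty capture group () placed right after the closing brace is a common variant.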
Whenever I package the Splunk app, I get an execute-permission error because my conf files have 744 permissions while Splunk expects 644, and so I cannot package the app. Is there any workaround for this? Below is a screenshot of the error.
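A minimal sketch of the usual fix, run before packaging (the sampleapp path is illustrative; point find at your real app directory):

```shell
# reproduce the problem state on a sample app, then normalize to 644
mkdir -p sampleapp/default
touch sampleapp/default/app.conf
chmod 744 sampleapp/default/app.conf
find sampleapp/default -name '*.conf' -exec chmod 644 {} +
ls -l sampleapp/default/app.conf
```

644 keeps the files readable by Splunk while dropping the execute bit that the packaging check rejects.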
I have a PowerShell script that needs to be run as admin to be able to load all of the data. It returns a .csv file exported to the lookups folder so that we can pull out the data and use it. I have the script in the correct directory on the Splunk server; I can see it and run it, but I'm not getting data out of it, which makes me think the script is not being run as admin. I've tried a few things but can't get it to work correctly. I've come up with a couple of options: 1. Make a managed service account that runs the script as admin. 2. Try to configure splunkd to allow running as admin (if possible?). 3. Other recommendations? I'm relatively new to Splunk, just trying to learn all I can, and I appreciate any pointers/guidance.
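One detail worth sketching (paths and names below are illustrative): a scripted input runs under whatever account the splunkd Windows service logs on as, so option 1 usually means changing the Splunk service's log-on account to one with the needed rights, rather than changing anything inside Splunk itself.

```ini
# inputs.conf -- scripted input; executes as the splunkd service account
[script://$SPLUNK_HOME\etc\apps\myapp\bin\collect_data.ps1]
interval = 3600
sourcetype = myapp:ps
disabled = 0
```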
Please, what is the best way to create a search query for identifying knowledge objects from inactive users and cleaning them up?
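A hedged sketch of one approach, run from the search head (REST endpoints as in recent Splunk versions): list scheduled searches whose owner no longer exists as a user account.

```spl
| rest /servicesNS/-/-/saved/searches
| fields title eai:acl.app eai:acl.owner is_scheduled
| join type=left eai:acl.owner
    [| rest /services/authentication/users
     | fields title
     | rename title as eai:acl.owner
     | eval user_exists=1]
| where isnull(user_exists)
| table title eai:acl.app eai:acl.owner is_scheduled
```

The same pattern can be repeated against other endpoints (e.g. lookups or views) to cover knowledge objects beyond saved searches.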
We are looking to confirm that the "JAMF Integrations" app supports the Jamf Pro API (vs. the Classic API), and that it can be configured to use API Roles and Clients with an Access Token, Client ID and Client Secret rather than Basic Auth.