All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Currently I have set up an alert that is triggered from Splunk Enterprise and sends a notification to a Slack group channel. Just curious to know: is it possible, and is there a way, to send a visualization image of the respective query results from Splunk to the Slack channel as the alert notification? If so, it would be much more readable for users than checking the dashboard or clicking through the alert query's results. Could anyone please help me with this?
I was trying to build an add-on using the Splunk Add-on Builder. We need to use an API key to authenticate to a third-party service. The question is: how does the Splunk Add-on Builder store passwords? I created a password field and saw that passwords.conf in the local folder is being updated. Does this mean that password fields are encrypted and stored in passwords.conf? Any help will be really appreciated.
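A minimal sketch for verifying this yourself from a search, using the standard storage/passwords REST endpoint (the realm filter is a placeholder for your add-on's name). Entries in passwords.conf are encrypted on disk, but a user with sufficient capabilities can read them back, which this search illustrates:

| rest /services/storage/passwords splunk_server=local
| search realm="*my_addon*"
| table title, realm, username, encr_password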
Hi All, I want to get archived data back from frozen buckets for a certain time frame. The index I am trying to fetch from is related to Windows event logs. Is there any script available to achieve this in a clustered environment? Help with this is much appreciated! Regards, Sanglap
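A minimal sketch of the standard thawing procedure, assuming the buckets were frozen to a directory like /frozen/wineventlog (the index name, archive path, and bucket ID here are illustrative). Bucket directory names encode their newest/oldest event times as epochs (db_<newest>_<oldest>_<id>), so you can copy only the buckets covering your time frame. In an indexer cluster, thaw on a single peer; thaweddb contents are not replicated:

# copy a bucket whose epoch range covers the needed time frame
cp -r /frozen/wineventlog/db_1672531200_1669852800_42 $SPLUNK_DB/wineventlog/thaweddb/
# rebuild the index and metadata files that were stripped when the bucket was frozen
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_DB/wineventlog/thaweddb/db_1672531200_1669852800_42
# restart so the thawed bucket becomes searchable
$SPLUNK_HOME/bin/splunk restart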
Hi, I have database inputs configured in Splunk with 2 sources on 1 server. The issue is that I can see data from one source but no data from the second source on the same server. I am not familiar with how these database inputs were configured. How do I troubleshoot this issue?
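A minimal sketch of a first troubleshooting step, assuming the inputs come from Splunk DB Connect, whose own logs land in _internal (the source pattern is an assumption about that app's log file names):

index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
| stats count by source
| sort -count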
Hi all, I designed a dashboard with 3 panels and I would like to set a different color for the background or headline of each panel so it can be read clearly. Is there any way to assign a different color to each panel in XML? Thanks
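A minimal sketch of a common community approach for Simple XML, assuming you can give each panel an id and inject CSS from a hidden html element (the ids, token name, and colors are illustrative):

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        #panel_a { background-color: #fce8e6 !important; }
        #panel_b { background-color: #e6f4ea !important; }
        #panel_c .panel-title { background-color: #e8f0fe !important; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel id="panel_a">...</panel>
  <panel id="panel_b">...</panel>
  <panel id="panel_c">...</panel>
</row>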
Hi All, I want to extract the dates in the last month on which there was no traffic to my application, using a Splunk query. Can anyone please help with this?
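A minimal sketch, assuming the application's events live in an index called my_app (an illustrative name). timechart emits a bucket for every day in the range, including empty ones, so the zero-count days can be filtered out directly:

index=my_app earliest=-30d@d latest=@d
| timechart span=1d count
| where count=0
| eval date=strftime(_time, "%F")
| table date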
Hello Splunk Enthusiasts, Let's say I have an index that contains our player base, their gamer scores, their global rank, and the region they play the game in. In a separate search, I calculated the percentage of our players by region. The results came back that 70% of our users play in NA, 20% in SA, and 10% in EU. I want to create a query that looks at the index and returns the top users based on their global rank, region, and the percentage of our player base. Example: let's say the index ranks 2000 users and I want to see the top 100. Then 70% of those users must come from NA, 20% from SA, and 10% from EU. An additional condition is that this must return the top 100 users regardless of their global rank: if the global_rank of the best SA user is 75 and NA users hold ranks 1-74, the query must still return the top 70 NA users, plus the top SA users at whatever global rank they fall. Is there a simple way I can get these values using a combination of stats, top, etc.?
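A minimal sketch, assuming an index named players (illustrative) with the global_rank and region fields from the question: rank users within each region by global rank, then keep each region's quota of the top 100.

index=players
| sort 0 global_rank
| streamstats count as region_rank by region
| where (region="NA" AND region_rank<=70) OR (region="SA" AND region_rank<=20) OR (region="EU" AND region_rank<=10)
| sort 0 global_rank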
Hi All, we are unable to see the indexers' internal logs in the _internal index, except for mongodb logs. We verified that the input configuration is present in the default inputs.conf, but when checking splunkd.log there are no TailReader process logs; the only related logs concern seekptr (generally seen when a log file rolls over). We even tried configuring inputs.conf with a different index and sourcetype for splunkd.log, but it didn't work. Also, almost half of the UFs have stopped reporting to the indexers. There are 6 indexers in the cluster. Any ideas about the issue would be very helpful. Thanks, Bhaskar
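A minimal sketch of one diagnostic you can run from a search head while the indexers' _internal data is missing: ask each peer's tailing processor directly what it knows about splunkd.log over REST (field names are flattened from the endpoint's nested output, so a wildcard is used):

| rest /services/admin/inputstatus/TailingProcessor:FileStatus splunk_server=*
| fields splunk_server, *splunkd.log*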
Hi, I have an existing DLP data model. Can we add the indexed DLP data to the existing one to make it CIM compliant, or do we need to create a new data model to add the data?
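A minimal sketch for checking whether the indexed data is already being picked up by the existing model's constraints, assuming the CIM Data Loss Prevention model (model name DLP is an assumption about your environment):

| tstats count from datamodel=DLP by index, sourcetype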
Dear fellow Splunkthusiasts! I have found that one of the old scheduled searches in my installation is failing with this error message: Invalid value "+18y@y" for time term 'latest' Looking closer, it turned out the search fails with any value beyond latest=01/19/2038:04:14:07. I have noticed this value as the expiration date of perpetual licenses as well. I understand this is the maximum time that can be represented by a four-byte signed integer as a number of seconds since 1970-01-01 00:00:00. My question is: how do I specify, using time modifiers in SPL, that my time range includes the future with no upper limit? I don't want to hard-code the above-mentioned time into my search, as that limit may (and surely will) change in the future, not to mention it is not very self-explanatory.
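A minimal sketch of one workaround, assuming what you really need is "from some lower bound onward, with no upper limit": run the search over All Time (leave both earliest and latest unset in the time picker or the saved search's dispatch settings) and apply only the lower bound inside the SPL, so no upper limit ever has to be encoded (the index name is illustrative):

index=licenses
| where _time >= relative_time(now(), "@d")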
Hi, we are facing an issue in manufacturing related to high CPU usage caused by security tools. To address this issue, we need to investigate the specific process that is responsible for the high CPU usage. Therefore, we are seeking a search method or tool that can help with this investigation.
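A minimal sketch, assuming per-process performance data is already being collected, for example the Perfmon Process object via the Splunk Add-on for Microsoft Windows (the index name and Perfmon field names below reflect that assumption):

index=perfmon sourcetype="Perfmon:Process" counter="% Processor Time"
| stats avg(Value) as avg_cpu by host, instance
| sort -avg_cpu
| head 20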
I created a field alias with read access given to everyone, but I am still not able to see it. Could someone please help here?
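A minimal sketch for checking how the alias is actually scoped, using the REST endpoint for field aliases (the alias name is a placeholder); besides read permissions, the alias must also be shared globally or from an app visible in your search context, and its stanza must match your data's sourcetype or source:

| rest /services/data/props/fieldaliases
| search title="*my_alias*"
| table title, stanza, eai:acl.app, eai:acl.sharing, eai:acl.perms.read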
I have the following queries:

index=myIndex app_name IN (my-app-a, my-app-b) process=end
| eval app_name = replace(app_name, "-[ab]$", "")
| where match(status, "^[45][0-9]{2}$") AND in(status, "500", "503", "504")
| timechart count by status

index=myIndex method!=GET process="start" app_name IN (my-app-a, my-app-b) process=end
| eval app_name=replace(app_name, "-[ab]$", "")
| timechart count
| timechart per_second(*)

The first query returns the number of errors over time and the second the requests per second. Even if there are no errors, the chart should show 0 and still include the requests per second; the end goal is to be able to compare the requests-per-second/error ratio. How can I combine these two into a single chart with two separate graphs? My best attempt:

index=myIndex app_name IN (my-app-a, my-app-b) process=end
| eval app_name = replace(app_name, "-[ab]$", "")
| where match(status, "^[45][0-9]{2}$") AND in(status, "500", "503", "504")
| timechart span=1h count as error_count
| append
    [search index=myIndex app_name IN (my-app-a, my-app-b) process=end
    | eval app_name=replace(app_name, "-[ab]$", "")
    | timechart span=1h count as requests_per_hour
    | fields _time, requests_per_hour]
| stats sum(error_count) as error_count sum(requests_per_hour) as requests_per_hour by _time
| sort -requests_per_hour

Is there any other way to do this?
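A minimal sketch of a single-search alternative, assuming errors are just the subset of the same events with status 500/503/504: compute both series in one timechart instead of appending two searches, so empty hours still produce a 0 in both columns.

index=myIndex app_name IN (my-app-a, my-app-b) process=end
| eval app_name=replace(app_name, "-[ab]$", "")
| eval is_error=if(in(status, "500", "503", "504"), 1, 0)
| timechart span=1h count as request_count, sum(is_error) as error_count
| eval requests_per_second=round(request_count/3600, 3)
| fields _time, error_count, requests_per_second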
Hi there, I am having some trouble matching patterns from a search string using the rex command. I will show the message I am trying to search on, as well as several rex statements that I am using to find and extract certain bits of data (denoted by asterisks) into fields that I use in a table statement. rex statements matching wildcards populated by digits work fine, but I'm not able to match and extract data matching asterisks when they are within quotes, even if I escape them.

| search Message="Error in breakfast table *, table name \"*\". The quick brown fox jumped over the lazy dog. The maximum length of the \"*\" data is currently set to * hotdogs, but the bun length is * inches. Increase the maximum length of the \"*\" bun to at least * inches and retry.*"
| rex "Error in breakfast table (?<breakfast_table>\d+)"
| rename breakfast_table as "BT"
| rex "table name \"(?<table_name>[^\"]*)\""
| rename table_name as "TN"
| rex "maximum length of the \"(?<max_bunlength>[^\"]*)\""
| rename max_bunlength as "MB"
| rex "data is currently set to (?<current_length>\d+)"
| rename current_length as "Current Length"

I am able to pattern match correctly on asterisks that just represent number values; the trouble is with asterisks within double quotes. For example, a real message may show "AB" or "Z", but this line will not match it, even though I have confirmed on regex101 that it should be matching the letters AB or Z correctly:

| rex "table name \"(?<table_name>[^\"]*)\""
| rename table_name as "TN"

Any suggestions on this?
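A minimal sketch of one likely fix, assuming the text lives in a Message field rather than in _raw: rex runs against _raw by default, so extractions that depend on the quoted parts of Message can silently return nothing unless the field is named explicitly.

| rex field=Message "table name \"(?<table_name>[^\"]+)\""
| rex field=Message "maximum length of the \"(?<max_bunlength>[^\"]+)\""
| table table_name, max_bunlength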
I'm looking over vulnerability scan data and have the _time field formatted as

| eval Last_Scanned = strftime(_time, "%F")

How can I create a search to show hosts (events) that have not been scanned within two weeks of the current date?
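A minimal sketch, assuming each scan produces an event per host (the index name is illustrative): take each host's most recent scan time and keep those older than 14 days.

index=vuln_scans earliest=0
| stats max(_time) as last_scanned by host
| where last_scanned < relative_time(now(), "-14d")
| eval Last_Scanned=strftime(last_scanned, "%F")
| table host, Last_Scanned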
Splunk versions being used: Splunk Server 1 = Version 8.2.7 Splunk Server 2 = Version 8.1.1 Background: I created a service which monitors different folders for incoming data. When a user places new data into one of the folders, Splunk SPL creates a tailored message for the Virtual Machine (VM) to move the data. If the message is received by the VM, the VM sends the data through a TCP port to another service which verifies the data. If verified, the data goes straight to the Splunk servers via TCP/IP. Issue: When I updated one of the servers, the whole data transfer process described above stopped working. My services cannot communicate with the VM anymore. Errors generated in Splunk logs: 1. "It seems the Splunk default certificates are being used. If certificate validation is turned on using the default certificates (not recommended), this results in loss of communication in mixed-version Splunk upgrades." 2. "Splunk's properly implemented crypto code resulted in the ciphertext being rejected instead of decrypted when AAD validation failed." Summary of question: Since two different versions of Splunk are running, is that affecting Splunk SPL's ability to send the message to the VM, i.e. is that why my VM is not communicating/working anymore? Thank you.
These are the 3 searches I have found, but I need to combine them so that I can get all the information out in one search. Also, how can I then take this and use a REST API with Azure to get the SAML group's real name?

This search gives indexes attached to roles:
| rest /services/authorization/roles
| table title srchIndexesAllowed

This search gives you SAML ID and roles:
| rest /services/admin/SAML-groups
| table title roles
| rename title as SAML

This search has roles to indexes:
| rest /services/authentication/users
| mvexpand roles
| table roles
| join roles
    [ rest /services/authorization/roles
    | rename title as roles
    | search srchIndexesAllowed=*
    | table roles srchIndexesAllowed]
| rename roles as Roles, srchIndexesAllowed as "Indexes this Role has access"
| dedup Roles
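A minimal sketch of one way to combine them, assuming the goal is one row per SAML group showing its Splunk roles and the indexes those roles can search (the left join keeps groups whose roles list no index restrictions):

| rest /services/admin/SAML-groups
| rename title as SAML_group
| mvexpand roles
| join type=left roles
    [| rest /services/authorization/roles
    | rename title as roles
    | table roles srchIndexesAllowed]
| stats values(roles) as Roles, values(srchIndexesAllowed) as Indexes by SAML_group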
We are ingesting Microsoft trace logs into Splunk using the Splunk Add-on for Microsoft Office 365 Reporting Web Service. Add-on link: https://splunkbase.splunk.com/app/3720 The add-on leverages the credentials from an application registered in Azure AD and connects to the URL below to pull the trace logs from Microsoft's reporting web services, polling the API every 5 minutes. URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2023-04-13T11%3A40%3A35.692020Z'%20and%20EndDate%20eq%20datetime'2023-04-13T12%3A40%3A35.692020Z'&$skiptoken=1999 Since April 6th we have observed a significant reduction in log volume in Splunk; to give some stats, the app used to pull 5-6 million events per day, which has come down to 100,000-200,000 events per day. On further investigation we found errors (401 Client Error) in the add-on's logs in Splunk, and it looks like it is failing to connect to the URL intermittently, leading to the drop in events. Error details below: requests.exceptions.HTTPError: 401 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2023-04-13T11%3A40%3A35.692020Z'%20and%20EndDate%20eq%20datetime'2023-04-13T12%3A40%3A35.692020Z'&$skiptoken=1999 We have also tried configuring the message trace inputs on another add-on, the Splunk Add-on for Microsoft Office 365, and have seen the same 401 Client Error. We have also validated the credentials (tenant ID, client ID, and secret key) and all look good. It looks like this issue has been identified by other people as well, and it was added as a known issue in the Splunk documentation for the Splunk Add-on for Microsoft Office 365. Has anyone identified a fix or a workaround? Any help would be appreciated.
Has anybody encountered Windows Security logs that look like this? If so, how did you guys fix it?  Thanks in advance.