All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

We recently started working with metrics data. The application is sending metrics events with the dimensions component, deployment_id, and timestamp_seconds_from_epoch, and the metric names change_committed and release.

We are trying to calculate, per deployment_id, the duration between the time a change was committed (change_committed) and released (release), based on timestamp_seconds_from_epoch (which is a timestamp in epoch time). We thought the transaction command would be helpful, but since we aren't leveraging _time and are instead using the custom time field timestamp_seconds_from_epoch, we are having some trouble figuring out the best approach. Here is what we currently have:

| mstats avg("change_committed") as change_committed prestats=true WHERE "index"="statsd" span=auto BY deployment_id
| table _time deployment_id
| append
    [| mstats avg("release") as release prestats=true WHERE "index"="statsd" span=auto BY deployment_id
    | table _time deployment_id ]

Example metrics events:

change_committed:1,timestamp_seconds_from_epoch:1651096172,deployment_id:28020
release:1,timestamp_seconds_from_epoch:1651097000,deployment_id:28020

How can we track the duration of timestamp_seconds_from_epoch between a change_committed and release event for each deployment_id?
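One possible approach (a sketch, not a tested answer: it assumes timestamp_seconds_from_epoch is available as an mstats BY dimension, and that each deployment_id has at most one commit and one release): split both metrics by deployment_id and timestamp, collapse to one row per deployment, then take the difference.

| mstats max("change_committed") as change_committed max("release") as release WHERE "index"="statsd" BY deployment_id timestamp_seconds_from_epoch
| eval commit_time=if(isnotnull(change_committed), tonumber(timestamp_seconds_from_epoch), null())
| eval release_time=if(isnotnull(release), tonumber(timestamp_seconds_from_epoch), null())
| stats min(commit_time) as commit_time min(release_time) as release_time by deployment_id
| eval duration_seconds=release_time-commit_time

For the example events above, deployment_id 28020 would get duration_seconds = 1651097000 - 1651096172 = 828.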
Good morning, I'm trialing Splunk Cloud in anticipation of a purchase. I have installed Splunk Enterprise as the deployment server and universal forwarders on three servers. My clients are showing up in "Forwarder Management", but I can't seem to get event logs from any servers except the deployment server.

I have enabled firewall ports outbound 8089 and inbound 9997 on the deployment server. These are all Server 2019 machines. I have verified that inputs.conf is pointing event logs to index = wineventlog, but that index locally has 0 results and about 112,000 results on the cloud server.

I'm sure it's something simple I'm missing with all the moving parts. Thank you in advance!
Hi, how do we increase the database metrics limit once it is reached? We are getting the below notice: "Maximum Custom Metrics reached". I looked at this article and it did not help: https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-increase-custom-metric-limits-for-database-monitoring/ta-p/28970

Thanks

^ Edited by @Ryan.Paredez to include more info
Hi, I have data in a Splunk table like the image below.

Arista | ConsoleRule | Host            | UnknownRule
Passed | Failed      | GDTVFVDFVS-BDHF | Passed
Passed | Failed      | FSSGVDF-BDHF    | Passed
Failed |             | DGUYSFDF-BDHF   | Passed
Passed | Failed      |                 |
Failed | Failed      | DGUYSFDF-BDHF   |
Failed | Failed      | DGUYSFDF-BDHF   |

I need it like the image below:

Arista | ConsoleRule | Host            | UnknownRule
Passed | Failed      | GDTVFVDFVS-BDHF | Passed
Passed | Failed      | FSSGVDF-BDHF    | Passed
Failed | Failed      | DGUYSFDF-BDHF   | Passed
Passed | Failed      | FSSGVDF-BDHF    |
Failed | Failed      | DGUYSFDF-BDHF   |
Failed |             |                 |

Can anyone please help us? Is there any possible way to achieve this?
Currently we're getting data from Azure Cloud, which sends certain logs to an event hub our customer set up. Then we pull the data from the event hub, just as stated in the documentation, with the MS Cloud Services add-on.

Our problem now is that our customer wanted to see some dashboards filled out with the incoming data. A normal request, we thought, so we installed the Microsoft Azure App for Splunk. There we saw nothing.

After further investigation we saw two things:
- the incoming data fields are all extracted, but horribly named, with long strings of names
- the sourcetype for all logs (around 7 different ones) is something like xyz_eventhub, which the app understandably doesn't know and can't use.

So my question is how to fix the issue of only having one sourcetype, even though the props/transforms within the Cloud Services add-on should extract everything perfectly. We are currently thinking about splitting the data into the needed sourcetypes with the help of regex and props/transforms configuration, but I'm like "why doesn't it work in the first place? I mean, the vendor is Microsoft, not some third-party no-name."

Glad for any ideas!
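For reference, one way to split a single event-hub sourcetype is an index-time transform (a sketch; the stanza name, the target sourcetype, and the regex here are assumptions to be adapted to the actual events):

# props.conf
[mscs:azure:eventhub]
TRANSFORMS-set_sourcetype = azure_signin_logs

# transforms.conf
[azure_signin_logs]
REGEX = "category":\s*"SignInLogs"
FORMAT = sourcetype::azure:signin
DEST_KEY = MetaData:Sourcetype

One such transforms stanza per log category would route each of the seven log types to its own sourcetype, which the Microsoft Azure App can then recognize.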
Hello, I have been given a list of 40 servers in a text file, all separated by commas, for example: server1, server2, server3, etc. I can't upload the text file to Splunk and compare the data that way, so is there a way in the search field I can just list all the servers and search my index? I know I can do OR between each one, but I'm sure there is a quicker way?

Thanks,

Allan
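The IN operator keeps this shorter than a chain of ORs (a sketch; the index and field names are placeholders):

index=myindex host IN (server1, server2, server3)

Since the file is already comma-separated, the list can be pasted between the parentheses largely as-is.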
I have the logs in this way:

measures: {
    API.V1.WEBS_ENTITLED_PRODUCTS: 296
    success: 300
}

What can the query be so that I can display the field "API.V1.WEBS_ENTITLED_PRODUCTS" and not its value? I want the output to be "API.V1.WEBS_ENTITLED_PRODUCTS".
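One possible way to capture a field's name rather than its value is foreach's <<FIELD>> token (a sketch; it assumes the JSON is auto-extracted as fields named measures.API.*):

... | foreach measures.API.*
    [ eval metric_name="<<FIELD>>" ]
| eval metric_name=replace(metric_name, "^measures\.", "")
| table metric_name

The replace strips the measures. prefix so the output is just "API.V1.WEBS_ENTITLED_PRODUCTS".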
Hello all, how do I check how long it took for one of the events to appear in Splunk?

By the way, Solved: How do i find out how long it takes Splunk to actu... - Splunk Community didn't help.

Thank you
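One common measure is the gap between the event's own timestamp (_time) and the time Splunk indexed it (_indextime), e.g. (the index name is a placeholder):

index=myindex
| eval indexing_lag_seconds = _indextime - _time
| table _time _indextime indexing_lag_seconds

A large positive indexing_lag_seconds means the event sat somewhere in the pipeline (or its source) for a while before being indexed.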
Currently, Splunk Cloud health is RED and we are unable to run any search query. Please help me recover from this circumstance. I changed the saved searches' alert conditions, but even that is not helping.
Hi, I am currently facing an issue where my Splunk Universal Forwarder is able to establish a connection with the Splunk server, but it is unable to send over the data from the target folder of interest. Is there a way to troubleshoot this?

A diagnostic search of index="_internal" shows that Splunk is streaming in system logs from my PC, proving that a link has already been established with the Splunk server. However, querying index="ForwarderText_index" (my target index for the targeted files) yields nothing.

Splunk Universal Forwarder installation configuration details:

Server: MyServerName
Port/Management Port: 8089 (default)
Target Folder: C:\Users\MyUserName\Documents\MyProject\logs\Splunk_Monitoring_Folder

inputs.conf location: C:\Program Files\SplunkUniversalForwarder\etc\system\local
File contents:

[monitor://C:\Users\cftfda01\Documents\MyProject\logs\Splunk_Monitoring_Folder\SubFolder01]
disabled = false
index = ForwarderText_index
host = MyComputerID

outputs.conf location: C:\Program Files\SplunkUniversalForwarder\etc\system\local
File contents:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = MyServerName:9997

[tcpout-server://MyServerName:9997]
Hi, when I call the lookup like below, it works fine:

| inputlookup test.csv

But when I use the lookup in a search, I get the message below:

Error in 'lookup' command: Could not construct lookup

Here is the lookup. What do I have to do, please?
Hi everyone, I have a list of id and event value by day, but some days are missing for some ids. I want to fill 0 or null for the missing dates so that every id has a continuous run of days.

_time       id  value
01/04/2022  1   10
01/04/2022  2   20
01/04/2022  3   30
02/04/2022  1   15
02/04/2022  2   30
03/04/2022  3   45
04/04/2022  1   25
04/04/2022  2   45
04/04/2022  3   65

Expecting:

_time       id  value
01/04/2022  1   10
01/04/2022  2   20
01/04/2022  3   30
02/04/2022  1   15
02/04/2022  2   30
02/04/2022  3
03/04/2022  1
03/04/2022  2
03/04/2022  3   45
04/04/2022  1   25
04/04/2022  2   45
04/04/2022  3   65

Thanks a lot.
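One possible approach (a sketch; it assumes _time is already parsed and that 0 is acceptable for the missing cells): let timechart build the full day-by-id grid, then flatten it back into rows.

... | timechart span=1d sum(value) as value by id limit=0
| fillnull value=0
| untable _time id value

timechart emits one cell per day for every id column across the whole search range, fillnull turns the empty cells into 0, and untable converts the grid back into one _time/id/value row per cell.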
Hi everyone, we have a Ping Directory application (LDAP) running on a Linux server. We have the Java appAgent and machine agent installed and configured on the Linux server. The issue is that every day at 1:05 AM my appAgent restarts, and that JVM restart time is shown on the AppDynamics dashboard instead of the Ping Directory JVM status.

I have spoken to the internal AppD team and raised a support ticket, but I am not satisfied with their responses. I understand their argument that the javaagent will not start standalone, as it is part of the Ping Directory JVM. I'm trying to chase down the mystery of what is causing the javaagent restart at exactly 1:05 AM every day. I'd appreciate any help in troubleshooting this issue.

Thanks,
Anand Gulla
Why are secrets masked in Jenkins but not in Splunk? In the Jenkins logs, using withCredentials, the passwords appear masked, but when I query the Jenkins jobs in Splunk they show up in clear text. How can I fix this?
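One common way to mask values before they are written to a Splunk index is an index-time SEDCMD in props.conf (a sketch; the sourcetype name and the regex are assumptions to adapt to the actual Jenkins events):

# props.conf on the indexer or heavy forwarder
[jenkins:console]
SEDCMD-mask_secrets = s/(password|token)=\S+/\1=########/g

Note this only affects events indexed after the change; data already in Splunk stays as it was ingested.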
Hi all, has anybody implemented a search to detect the following use case? https://adsecurity.org/?p=1785

Any suggestions on how to write the query would be highly appreciated. We are getting AD logs in, with all the necessary auditing enabled.
Alerts vs. reports on the Splunk "Searches, reports and alerts" page.

I want this query to show the number of alerts and the number of reports, matching exactly what the "Searches, reports and alerts" page shows:

| rest /servicesNS/-/-/saved/searches
<eval for type here>
| stats count by type

I found this question long ago, but no answer was given that produces an exactly matching count: https://community.splunk.com/t5/Alerting/What-is-the-difference-between-alert-and-report/m-p/368683

Woodcock mentioned this, which is a nice explanation of why there is no difference between alert and report anymore:

"Originally only alerts had alert actions, but customers insisted, and now reports also can have alert actions, so literally there is no functional difference between the two. There is now only a taxonomical difference, which you are free to slice any way that you like. Settings-wise, the difference between the two is now defined in savedsearches.conf as: alert.track=1 means alert and alert.track=0 means report. That is it."

The main thing is, I want to find out how Splunk decides whether it's an alert or a report on the web.
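One heuristic to try (a sketch and an assumption, not the UI's documented logic; its counts should be compared against the page): treat a saved search as an alert when it has alert tracking enabled or a real alert condition, i.e. an alert_type other than "always".

| rest /servicesNS/-/-/saved/searches
| eval type=if('alert.track'=1 OR (isnotnull('alert_type') AND 'alert_type'!="always"), "alert", "report")
| stats count by type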
I am trying to create a table which shows 3 columns: error msg, error code, and count. My current query is pulling the error code/msg in one column and the error count individually instead of together. Please assist.

My current query
Current output
Expected output
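For a three-column table, stats can group by both fields at once (a sketch; the index and the error_msg/error_code field names are placeholders for whatever the events actually extract):

index=myindex
| stats count by error_msg error_code
| sort - count

This produces one row per distinct error msg/code pair, with the combined count, rather than counting each field separately.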
So I have this search looking to send emails to people logging into a legacy SH, but the map command breaks my results.

index=_audit sourcetype=audittrail action="login attempt"
| eval user=user."@gmail.com"
| fields user
| map search="| sendemail to=\"$user$\" subject=\"Please stop using the old SH\" message=\"Please migrate to the new SH\" sendresults=true inline=true format=raw"
We are sending order numbers into Analytics from Business Transactions at various stages in the order flow: order numbers are sent as NEW orders, and the same order numbers are sent again when they reach a stage of completion, ASSIGNED. The values come in from two different business transactions, so they show up in different columns in Analytics.

We are struggling to write a query that returns only the order numbers that show up in NEW but don't show up in ASSIGNED, to alert us to orders that may be stuck. My SQL guy tried:

select segments.userData.'NEW' from transactions where segments.userData.'NEW' is not null
minus
select segments.userData.'ASSIGNED' from transactions where segments.userData.'ASSIGNED' is not null

... but ADQL doesn't support 'minus' and also doesn't seem to allow two 'select' commands in a single query. Any ideas?
After installing the app, add-on, and configuring the API permissions, the M365 Usage & Adoption dashboard does not populate. Going into Search, there are no results for the Office365Services User Co... See more...
After installing the app, add-on, and configuring the API permissions, the M365 Usage & Adoption dashboard does not populate. Going into Search, there are no results for the Office365Services User Counts source. I'm wondering if maybe an API permission has moved from Office 365 Management APIs to the Microsoft Graph API? Any feedback would be appreciated.