All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

My requirement is to send a notification when a job runs longer than a specified time.
Condition 1 - the first job of every day should run in under 45 minutes; if it exceeds 45 minutes, trigger an alert.
Condition 2 - all other jobs, on all days, should not exceed 10 minutes; if one exceeds 10 minutes, trigger an alert.
Condition 3 - if these jobs do not run every 15 minutes (a job needs to start a run every 15 minutes), trigger an alert.
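A sketch of one way to express conditions 1 and 2 in SPL, assuming hypothetical index, sourcetype, and status values, and a duration_sec field extracted from the job logs:

```
index=job_logs sourcetype=scheduler status=FINISHED
| eval runtime_min = duration_sec / 60
| eval day = strftime(_time, "%Y-%m-%d")
| streamstats count as run_of_day by day
| eval threshold_min = if(run_of_day == 1, 45, 10)
| where runtime_min > threshold_min
```

Condition 3 would usually be a separate alert scheduled every 15 minutes over earliest=-15m that triggers when `| stats count` returns 0.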
Hello, I am new to learning Splunk. I have installed the Splunk App for AWS on a Splunk instance and configured an AWS add-on input with CloudWatch as the source. It is pulling various resource logs, which appear in search, but the log data values are not coming up in the Splunk app dashboard, which shows this message: "Some panels may not be displayed correctly because the following inputs have not been configured: CloudWatch, Config, CloudTrail, Description. Or, the saved search "Addon Metadata - Summarize AWS Inputs" is not enabled on Add-on instance". Does anybody have any idea how to resolve this issue?
Hi, I have a question about OpenTelemetry. We are changing our applications to support OpenTelemetry traces, metrics, and logs. We only use the on-prem version of Splunk, and so do all our customers. I spoke with you a year ago, and you told me then that OpenTelemetry support would only be available for cloud users. Is that still the case, or has this strategy changed? For security reasons it will not be possible to use the cloud version of Splunk, and that also goes for all our customers (all our customers also have on-prem Splunk licenses). Regards, Sindre
Hi, how do I give Splunk user accounts access so they have visibility of the Cloud Monitoring Console? Can you help me understand the exact process?
Hi everyone, I have created multiple dashboards with multiple searches, and this is now impacting Splunk performance. I want to use a base search for my dashboards but am not sure how. Below are the queries for one of my dashboards:

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" | lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName | stats count by OrgName</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" | stats count by LicenseName</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$ TotalLicenses!=0 | lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName | search $OrgName$ | dedup OrgFolderName, LicenseName, SalesforceOrgId | chart sum(TotalLicenses) as "Total Licenses" sum(UnusedLicenses) as "Unused Licenses" sum(UsedLicenses) as "Used Licenses" by LicenseName</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$ | lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName | search $OrgName$ | dedup OrgFolderName, LicenseName, SalesforceOrgId | stats sum(TotalLicenses) as "Total-Licenses" sum(UsedLicenses) as "Used Licenses" sum(UnusedLicenses) as "Unused Licenses" by LicenseName OrgName SalesforceOrgId | sort -Total-Licenses</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$ | lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName | search $OrgName$ | dedup OrgFolderName, LicenseName, SalesforceOrgId | stats latest(TotalLicenses) as "Total-Licenses" latest(UsedLicenses) as "Used Licenses" latest(UnusedLicenses) as "Unused Licenses" by LicenseName OrgName SalesforceOrgId | sort -Total-Licenses | sort OrgName</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$ | lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName | search $OrgName$ | dedup OrgFolderName, LicenseName, SalesforceOrgId | stats latest(TotalLicenses) as "Total-Licenses" latest(UsedLicenses) as "Used Licenses" latest(UnusedLicenses) as "Unused Licenses" by LicenseName OrgName SalesforceOrgId | sort -Total-Licenses | sort OrgName</query>

I have read multiple base search documents, but it is not working for my dashboards. Can someone guide me on how I can apply a base search to my queries?
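In Simple XML, a base search is declared once with an id and referenced from each panel with base=; the panel searches then start with a pipe. A minimal sketch using the sourcetype from the question (layout, time range, and the id name are illustrative):

```xml
<form>
  <search id="baseLicense">
    <query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$
| lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName
| search $OrgName$</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="baseLicense">
          <query>| stats count by LicenseName</query>
        </search>
      </chart>
    </panel>
  </row>
</form>
```

Note that post-process searches work best when the base search is a transforming search or returns a limited set of events and fields; a raw-event base search that exceeds the post-process result limit will silently truncate.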
Hi everyone, I need to know whether it is possible to send data via HEC from one source to two different Splunk instances. Currently, the source is sending data to one Splunk instance, and I want to test the same on a different Splunk environment by getting the data in. Thanks
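HEC itself has no fan-out, so the usual options are to have the source post each event to both endpoints, or to put a forwarder in between with two tcpout groups. A minimal client-side sketch in Python (endpoint URLs and tokens are hypothetical placeholders):

```python
import json
import urllib.request

# Hypothetical HEC endpoints and tokens -- replace with your own.
HEC_TARGETS = [
    ("https://splunk-a.example.com:8088", "TOKEN-A"),
    ("https://splunk-b.example.com:8088", "TOKEN-B"),
]

def build_hec_request(base_url, token, event, index="main"):
    """Build an HTTP request for the HEC /services/collector/event endpoint."""
    payload = json.dumps({"event": event, "index": index}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/services/collector/event",
        data=payload,
        headers={"Authorization": "Splunk " + token},
    )

def fan_out(event):
    """Send the same event to every configured HEC endpoint."""
    for base_url, token in HEC_TARGETS:
        urllib.request.urlopen(build_hec_request(base_url, token, event))
```

Each instance needs its own HEC token, and a failure on one endpoint does not affect delivery to the other unless you add shared error handling.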
Can someone please help me with this? I am looking for a query so that if count is less than 0 it is changed to 0; otherwise it displays the actual count. For example, if the count is -23, the result should be count=0, and if the count is 23, the result should be count=23.
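In SPL this is a one-line eval; the search before the pipe is illustrative:

```
... | eval count = max(count, 0)
```

or, equivalently, with an explicit condition:

```
... | eval count = if(count < 0, 0, count)
```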
Hello all, I need help generating the average response times for the data below using the tstats command, and help with the Splunk query. I am dealing with a large amount of data and am building a visual dashboard for my management, so I am trying to use tstats because those searches are faster. I am stuck: I am unable to find the average response time using the value of Total_TT in my tstats command. When I execute the tstats search below, it says it returned some number of events, but the value is blank. Can someone help me with the query?

Sample data:
2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)

SPL query:
| tstats values(PREFIX(total_tt:)) as AVG-RT where index=test_data sourcetype="tomcat:runtime:log" TERM(guid)
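One possible cause is that the token extracted by PREFIX(total_tt:) is the string "947ms", which is not numeric, so numeric aggregations on it come back empty. A sketch of one workaround is to group by the prefix and strip the unit afterwards (index and sourcetype as in the question; exact tokenization depends on your segmenters):

```
| tstats count where index=test_data sourcetype="tomcat:runtime:log" by PREFIX(total_tt:)
| rename "PREFIX(total_tt:)" as total_tt
| eval total_ms = tonumber(replace(total_tt, "ms$", ""))
| stats avg(total_ms) as avg_response_ms
```

This keeps the fast indexed-term scan of tstats and only does the string-to-number conversion on the small grouped result.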
Hi all, I have a few events in Splunk that should be generated all the time; if those events are not being generated, we need to know that there is some issue. So we have to detect events with zero count when checking the data for the last 15 minutes, and display an alert message stating that there have been no events in the last 15 minutes.

Sample event:
{"log":"[13:18:16.761] [INFO ] [] [c.c.n.t.e.i.T.lloutEventData] [akka://Mmster/user/$b/worrActor/$rOb] - channel=\"AutoNotification\", productVersion=\"2.3.15634ab725\", apiVersion=\"A1\", uuid=\"dee45ca3-2401-13489f240eaf\", eventDateTime=\"2022-09-12T03:18:16.760Z\", severity=\"INFO\", code=\"ServiceCalloutEventData\", component=\"web.client\", category=\"integrational-external\", serviceName=\"Consume Notification\", eventName=\"MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT_REQUEST\", message=\"Schedule Job start, r\", entityType=\"MQST\",returnCode=\"null\"}

I have written a query like this:
index=a0_pay MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT* | rex field=log "eventName=\"*(?<eventName>[^,\"\s]+)" | rex field=log "serviceName=\"*(?<serviceName>[^\"]+)" | search eventName="MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT*" AND serviceName="Consume Notification" | stats count by eventName | where count=0 | eval message="No Events Triggered for Mandate Notification retrieval Callout" | table count message

It is not fetching results properly. Is there another way to find and trigger results if there are no events generated? Thanks in advance.
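The likely issue is that `stats count by eventName` can never emit a zero-count row for an eventName that produced no events, so the `where count=0` clause never matches. Dropping the by-clause makes `stats count` return a single row with count=0 when nothing matched. A sketch (search terms as in the question):

```
index=a0_pay "MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT" earliest=-15m
| stats count
| where count = 0
| eval message = "No events triggered for Mandate Notification retrieval callout in the last 15 minutes"
| table count message
```

Scheduled every 15 minutes with the alert condition "number of results > 0", this fires exactly when the feed goes quiet.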
I am creating an index and have configured the inputs.conf file. I have two prod servers with app logs that share the same Linux path. Additionally, I have two test (non-prod) servers that also share a Linux log path, but a different one from the prod servers. Besides hard-coding the servers in inputs.conf, how does the process determine which host to collect the log data from when identical paths are listed in inputs.conf? Some questions:
- Can I use the same index for prod and non-prod (best practice?)? The inputs.conf has index=x under the log stanza, so that maps the inputs.conf file to collect the data, and the data belongs to index=x. On the deployment server I create a serverclass with all 4 servers (prod and non-prod) and assign the serverclass to the app that contains the inputs.conf file.
- Or should I be creating separate indexes (prod and non-prod), then separate apps (prod and non-prod), then separate serverclasses (prod and non-prod)?
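One common pattern is a separate deployment app per environment, each with its own inputs.conf, targeted by serverclass whitelists, so the same path can land in different indexes depending on the host. A sketch (paths, index names, and hostnames are illustrative):

```
# app_prod_inputs/local/inputs.conf
[monitor:///var/log/myapp]
index = app_prod
sourcetype = myapp:log

# serverclass.conf on the deployment server
[serverClass:prod_servers]
whitelist.0 = prodhost1
whitelist.1 = prodhost2

[serverClass:prod_servers:app:app_prod_inputs]
restartSplunkd = true
```

A mirrored app_nonprod_inputs app with index = app_nonprod, attached to a nonprod serverclass, keeps the environments cleanly separated for retention and access control.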
I am new to Splunk queries. I need to capture the field value of tn, "Subscription_S04_LookupInvoiceStatus", and the response data (the <responseData> value in the XML below) for the corresponding tn field value, and display them under Statistics. The "Subscription_S04_LookupInvoiceStatus" value is present multiple times in the XML file, as is the response data for the corresponding tn field value; I want to query for unique values (remove duplicates). I tried the query below, but it is not pulling the response data. Kindly help me; it would be a great help.

Query I tried:
index=perf-*** host=****** source=/home/JenkinsSlave/JenkinsSlaveDir/workspace/*/project/logs/*SamplerErrors.xml | eval tn=replace(tn,"\d{1}\d+","") | rex d"<responseData class=\"java\.lang\.String\">?{(?P<Response_Data1>[\w\D]+)<\/java.net.URL>" | dedup tn | stats count by tn,Response_Data1 | rex field=Response_Data1 max_match=2 "<responseData class=\"java\.lang\.String\">?{(?P<Response_Data2>[\w\D]+)<\/java.net.URL>" | eval Response_Data2=if(mvcount(Response_Data2)=2, mvindex(Response_Data2, 2), Response_Data2)

XML data:
</sample>
<sample t="48" lt="0" ts="1662725857475" s="true" lb="HealthCheck_Subscription_S04_LookupInvoiceStatus_T01_LookupInvoiceStatus" rc="200" rm="Number of samples in transaction : 1, number of failing samples : 0" tn="Subscription_S04_LookupInvoiceStatus 1-1" dt="" by="465" ng="1" na="1">
<httpSample t="48" lt="48" ts="1662725858479" s="true" lb="EDI2" rc="200" rm="OK" tn="Subscription_S04_LookupInvoiceStatus 1-1" dt="text" by="465" ng="1" na="1">
<responseHeader class="java.lang.String">HTTP/1.1 200 OK
Date: Fri, 09 Sep 2022 12:17:38 GMT
Content-Type: application/json; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Content-Encoding: gzip
</responseHeader>
<requestHeader class="java.lang.String">Connection: keep-alive
content-type: application/json
Authorization: Bearer test_*****
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
perftest: true
Content-Length: 40
Host: stage-subscription.teslamotors.com
X-LocalAddress: /10.33.51.205
</requestHeader>
<responseData class="java.lang.String">{"orderRefId":"****","productName":"***","country":"NL","invoiceInformation":[{"uniqueOrderId":"****","amount":**,"currency":null,"invoiceStatus":"**","dueDate":null,"cycleStartDate":"**","cycleEndDate":"*****","paymentDate":"****"}]}</responseData>
<responseFile class="java.lang.String"/>
<cookies class="java.lang.String"/>
<method class="java.lang.String">POST</method>
<queryString class="java.lang.String">{ "OrderRefId": "*****"}</queryString>
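A sketch of a simpler extraction: one rex for the tn attribute (dropping the trailing thread number) and one for the responseData body, then dedup. The `<` in the closing tag bounds the capture, so the greedy-capture problem in the original query is avoided (source filter as in the question; adjust if events split differently):

```
index=perf-* source=*SamplerErrors.xml
| rex "tn=\"(?<tn>[^\"]+?)\s+\d+-\d+\""
| rex "<responseData class=\"java\.lang\.String\">(?<Response_Data>[^<]*)</responseData>"
| dedup tn, Response_Data
| table tn, Response_Data
```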
We have changed how we do things and intend to move to smartcache shortly. We have a lot of frozen data we would like to put back into circulation, in anticipation of making it readily available to be retrieved when required. I understand we can utilise a frozen folder; however, we would like to pull the data back into our cache before the move to smartcache, allowing Splunk to manage it via the smartcache storage. Is there a way or method by which this can be achieved?
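The standard route back from frozen is thawing: copy the archived bucket into the index's thaweddb directory and rebuild it so it becomes searchable again. A sketch of the commands (paths and the bucket name are illustrative; thawed buckets sit outside normal retention management):

```
# Copy a frozen bucket into the index's thaweddb directory
cp -r /frozen/archive/myindex/db_1662000000_1661000000_42 \
      $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/

# Rebuild the bucket so it becomes searchable again
$SPLUNK_HOME/bin/splunk rebuild \
      $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1662000000_1661000000_42
```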
An IP sends logs to a syslog server via TCP/UDP 1503, and a Universal Forwarder is installed on that server. I need to send the logs from the syslog server to the Splunk server under index="ibmguardium". Can someone assist, please?
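Assuming the syslog daemon writes the received messages to files, a minimal inputs.conf sketch for the Universal Forwarder (file path and sourcetype are illustrative):

```
[monitor:///var/log/guardium/*.log]
index = ibmguardium
sourcetype = ibm:guardium
disabled = false
```

The forwarder could instead listen directly with a [tcp://1503] or [udp://1503] stanza, but monitoring files written by the syslog server is generally the more robust pattern, since the files buffer data across forwarder restarts.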
Hello, how would I assign one sourcetype to two different indexes, one after another? As an example: I assigned sourcetype=win:syslog to index=winsyslog_test on January 20, 2022. Now I need to assign sourcetype=win:syslog to index=win_syslog. I have 2 questions: 1. How would I assign sourcetype=win:syslog to index=winsyslog_test and index=win_syslog under this condition? 2. If I assign sourcetype=win:syslog to index=win_syslog, will all of the events with sourcetype=win:syslog (and index=winsyslog_test) since January 20, 2022 also show up under index=win_syslog? Any help will be highly appreciated. Thank you!
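Index assignment happens at ingest time, so events already written to winsyslog_test stay there; only newly arriving events follow a changed setting. If the input is under your control, changing index= in the forwarder's inputs.conf is the simplest route; otherwise, index-time routing on the indexer or heavy forwarder is one option. A props/transforms sketch (stanza name is illustrative):

```
# props.conf
[win:syslog]
TRANSFORMS-route_index = route_win_syslog

# transforms.conf
[route_win_syslog]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = win_syslog
```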
I have my Splunk integrated with the ServiceNow add-on for incident creation. When the alert is set to real-time, I receive "unknown sid" in the alert history and no tickets are generated, but when it is set to a 1-minute scheduled window it works fine with no issues. Could someone help me understand what the issue is here, please?
Hello, I'm a newbie in Splunk and I'd like to draw a pie chart where the total value is taken from a CSV sheet. E.g. X = 2 and Y = 10, and I'd like the pie chart total to take the value of Y, with X as a part of it with its percentage. So the total pie chart value is 100%, where the 100% represents the value of Y and X represents 20% of it. The best query I have reached is (index="A" source="*B*" | chart values(X) over Y | transpose); however, the chart represents the percentages of X and Y as if the total value of the pie chart is (X+Y), which is not what I want.
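One way is to build the two slices explicitly, so the pie's total equals Y rather than X+Y (field names as in the question; the stats functions assume one row of interest):

```
index="A" source="*B*"
| stats latest(X) as X, latest(Y) as Y
| eval Remainder = Y - X
| fields X, Remainder
| transpose
| rename column as slice, "row 1" as value
```

With X=2 and Y=10 this produces slices of 2 and 8, so X renders as 20% of the pie.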
I have this search:

SELECT C_ID, REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML, C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY" WHERE C_CREATED_DATE_TIME > ? ORDER BY C_CREATED_DATE_TIME ASC

and I want to add an AND clause to the WHERE section. For some reason it doesn't work. I tried:

SELECT C_ID, REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML, C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY" WHERE C_CREATED_DATE_TIME > ? and C_CREATED_DATE_TIME > convert(varchar,'2022-08-31',121) ORDER BY C_CREATED_DATE_TIME ASC

and I tried:

SELECT C_ID, REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML, C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY" WHERE C_CREATED_DATE_TIME > convert(varchar,'2022-08-31',121) AND C_CREATED_DATE_TIME > ? ORDER BY C_CREATED_DATE_TIME ASC

Neither worked. Any advice?
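If C_CREATED_DATE_TIME is a datetime column, comparing it against a varchar forces an implicit conversion that can behave unexpectedly; converting the literal to datetime instead is one thing to try (T-SQL sketch, with the SELECT list shortened for readability):

```sql
SELECT C_ID, C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME,
       C_WAS_PROCESSED, C_LOGICAL_ID
FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
WHERE C_CREATED_DATE_TIME > ?
  AND C_CREATED_DATE_TIME > CONVERT(datetime, '2022-08-31', 121)
ORDER BY C_CREATED_DATE_TIME ASC
```

Also note that if this is a DB Connect rising-column input, the ? placeholder must stay as the rising-column comparison; an added predicate that filters out every row makes the input quietly return nothing rather than raising an error.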
Hi, just curious whether this is possible, as I have an interesting challenge. I have extracted key=value fields: id0=0000, id1=1111, id2=2222, idN=NNNN, zone0=zone0, zone1=zone1, zone2=zone2, zoneN=zoneN. Now I want to create a new field where just the number auto-increments: | eval example0 = id0 + " location:" + zone0. My challenge is how to make that more "automatic", as I don't know the number N in the event, and I want to automate this new field so that for every exampleN I have the same eval expression. It will be a little more complicated, as I'll add a case statement in the eval, but the initial challenge is how to automate it in the simpler, just-string scenario.
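SPL's foreach command with a wildcard can template the eval over every matching field, whatever N turns out to be (a sketch; note that string concatenation in eval uses the . operator):

```
... | foreach id*
    [ eval example<<MATCHSTR>> = '<<FIELD>>' . " location:" . 'zone<<MATCHSTR>>' ]
```

Here <<MATCHSTR>> expands to the part matched by the wildcard (0, 1, 2, ...), so each idN is paired with its corresponding zoneN; the eval body can later be swapped for a case() expression using the same tokens.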
Is there a way to track when an index stopped bringing in data? I just noticed that one of our indexes is no longer bringing data into Splunk. Is there a command with which I can find the last known time? I have been able to manually track it back to the day when it stopped receiving data.
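A quick check with tstats (index name is illustrative; use earliest=0 or a wide time range so old data is included):

```
| tstats latest(_time) as last_event where index=myindex
| eval last_event = strftime(last_event, "%Y-%m-%d %H:%M:%S")
```

The same pattern across all indexes, `| tstats latest(_time) as last_event where index=* by index`, makes a handy health panel for spotting any feed that has gone quiet.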
We have the outlier SPL and visualizations working, but I don't know how to create the alerts themselves. How do we go about it? We can use sendemail, but it won't be captured within _audit, which is a shame.