All Topics

I am new to Splunk queries. I need to capture the field value of tn ("Subscription_S04_LookupInvoiceStatus") and the responseData (highlighted in bold in the XML below) for the corresponding tn field value, and display them under statistics. The value "Subscription_S04_LookupInvoiceStatus" appears multiple times in the XML file, each with its own responseData, and I want the query to return unique rows (remove duplicates). I tried the query below, but it is not pulling the response data. Kindly help me; it would be a great help.

Query I tried:

    index=perf-*** host=****** source=/home/JenkinsSlave/JenkinsSlaveDir/workspace/*/project/logs/*SamplerErrors.xml
    | eval tn=replace(tn,"\d{1}\d+","")
    | rex "<responseData class=\"java\.lang\.String\">?{(?P<Response_Data1>[\w\D]+)<\/java.net.URL>"
    | dedup tn
    | stats count by tn, Response_Data1
    | rex field=Response_Data1 max_match=2 "<responseData class=\"java\.lang\.String\">?{(?P<Response_Data2>[\w\D]+)<\/java.net.URL>"
    | eval Response_Data2=if(mvcount(Response_Data2)=2, mvindex(Response_Data2, 2), Response_Data2)

XML Data:
--------------------
</sample>
<sample t="48" lt="0" ts="1662725857475" s="true" lb="HealthCheck_Subscription_S04_LookupInvoiceStatus_T01_LookupInvoiceStatus" rc="200" rm="Number of samples in transaction : 1, number of failing samples : 0" tn="Subscription_S04_LookupInvoiceStatus 1-1" dt="" by="465" ng="1" na="1">
<httpSample t="48" lt="48" ts="1662725858479" s="true" lb="EDI2" rc="200" rm="OK" tn="Subscription_S04_LookupInvoiceStatus 1-1" dt="text" by="465" ng="1" na="1">
<responseHeader class="java.lang.String">HTTP/1.1 200 OK
Date: Fri, 09 Sep 2022 12:17:38 GMT
Content-Type: application/json; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Content-Encoding: gzip
</responseHeader>
<requestHeader class="java.lang.String">Connection: keep-alive
content-type: application/json
Authorization: Bearer test_*****
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
perftest: true
Content-Length: 40
Host: stage-subscription.teslamotors.com
X-LocalAddress: /10.33.51.205
</requestHeader>
<responseData class="java.lang.String">{"orderRefId":"****","productName":"***","country":"NL","invoiceInformation":[{"uniqueOrderId":"****","amount":**,"currency":null,"invoiceStatus":"**","dueDate":null,"cycleStartDate":"**","cycleEndDate":"*****","paymentDate":"****"}]}</responseData>
<responseFile class="java.lang.String"/>
<cookies class="java.lang.String"/>
<method class="java.lang.String">POST</method>
<queryString class="java.lang.String">{ "OrderRefId": "*****"}</queryString>
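A minimal sketch of one way to approach this, assuming the sampler XML shown above is the raw event text (the index/source filters are copied from the question; the regexes anchor on the tags that actually appear in this data, not on </java.net.URL>):

    index=perf-*** host=****** source=/home/JenkinsSlave/JenkinsSlaveDir/workspace/*/project/logs/*SamplerErrors.xml
    | rex "tn=\"(?<tn>[^\"]+)\""
    | rex "<responseData class=\"java\.lang\.String\">(?<Response_Data>[^<]*)</responseData>"
    | eval tn=replace(tn, "\s+\d+-\d+$", "")
    | dedup tn Response_Data
    | stats count by tn, Response_Data

The first rex captures the tn attribute, the second captures everything between the responseData open and close tags, the eval strips the trailing thread suffix ("1-1"), and dedup keeps one row per unique tn/response pair.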
We have changed how we do things and intend to move to smartcache shortly. We have a lot of frozen data we would like to put back into circulation, in anticipation of making it readily available to be retrieved when required. I understand we can utilise a frozen folder; however, we would like to pull it back into our cache before the move to smartcache, allowing Splunk to manage it via the smartcache storage. Is there a way/method that this can be achieved?
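For reference, the standard thawing procedure (a sketch; the index name, archive path, and bucket name below are examples): copy each frozen bucket into the index's thaweddb directory and rebuild it so it becomes searchable again. Thawed buckets sit outside the normal retention/ageing policies, which gets the data back into circulation ahead of the migration.

    # Copy a frozen bucket back into the index's thaweddb directory
    cp -r /archive/frozen/myindex/db_1662725857_1662639457_42 \
        $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/

    # Rebuild the bucket so Splunk regenerates its index files
    $SPLUNK_HOME/bin/splunk rebuild \
        $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1662725857_1662639457_42

    # Restart Splunk (or roll the restart per your normal process)
    $SPLUNK_HOME/bin/splunk restart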
From an IP, logs are sent to a syslog server via TCP/UDP port 1503, and a Universal Forwarder is installed on that server. I need to send the logs from the syslog server to the Splunk server under index="ibmguardium". Can someone assist, please?
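A minimal sketch of the forwarder side, assuming the syslog daemon writes the Guardium logs to files under /var/log/guardium/ (the path and sourcetype are assumptions; the ibmguardium index must already exist on the indexer):

    # inputs.conf on the Universal Forwarder
    [monitor:///var/log/guardium/*.log]
    index = ibmguardium
    sourcetype = ibm:guardium
    disabled = false

outputs.conf on the same forwarder then points at the Splunk indexer's receiving port (9997 by default).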
Hello, how would I assign one sourcetype to two different indexes, one after another? As an example: I assigned sourcetype=win:syslog to index=winsyslog_test on January/20/2022. Now I need to assign sourcetype=win:syslog to index=win_syslog. I have two issues: 1. How would I assign sourcetype=win:syslog to index=winsyslog_test and index=win_syslog under this condition? 2. If I assign sourcetype=win:syslog to index=win_syslog, will all of the events with sourcetype=win:syslog (with index=winsyslog_test) received since January/20/2022 also show up under index=win_syslog sourcetype=win:syslog? Any help will be highly appreciated. Thank you!
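One point that may help frame this: the index is assigned at ingest time by the input (or an index-time transform), so changing it only affects events received after the change; events already written to winsyslog_test stay there and will not appear under win_syslog. A minimal sketch of the input change (the stanza name is an example; use whichever input currently carries this sourcetype):

    # inputs.conf on the forwarder
    [WinEventLog://System]
    sourcetype = win:syslog
    index = win_syslog

To search across old and new data together, query (index=winsyslog_test OR index=win_syslog) sourcetype=win:syslog.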
I have my Splunk integrated with the SNOW add-on for incident creation. When the alert is set to real-time, I receive "unknown sid" in the alert history and no tickets are generated, but when it is set to a 1-minute scheduled window, it works fine with no issues. Could someone help me understand what the issue is here, please?
Hello, I'm a newbie in Splunk and I'd like to draw a pie chart where the total value is taken from a CSV sheet. E.g. X = 2 and Y = 10, and I'd like the pie chart total to take the value of Y, with X as part of it with its percentage. So the total pie chart value is 100%, where the 100% represents the value of Y and X represents 20% of it. The best query I reached is (index="A" source="*B*" | chart values(X) over Y | transpose); however, the chart represents the percentages of X and Y as if the total value of the pie chart is (X+Y), which is not the case I want.
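A sketch of one way to get that shape, assuming X and Y each have a single value in the search window: compute the remainder Y - X so the two slices sum to Y, then transpose so each row becomes a slice.

    index="A" source="*B*"
    | stats latest(X) as X, latest(Y) as Y
    | eval Remainder = Y - X
    | fields X, Remainder
    | transpose column_name=category
    | rename "row 1" as value

With X=2 and Y=10 this yields slices X=2 and Remainder=8, so X shows as 20% of a pie whose total is the value of Y.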
I have this search:

    SELECT C_ID, REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML, C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID
    FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
    WHERE C_CREATED_DATE_TIME > ?
    ORDER BY C_CREATED_DATE_TIME ASC

and I want to add an AND clause to the WHERE section. For some reason it doesn't work. I tried:

    SELECT C_ID, REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML, C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID
    FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
    WHERE C_CREATED_DATE_TIME > ? and C_CREATED_DATE_TIME > convert(varchar,'2022-08-31',121)
    ORDER BY C_CREATED_DATE_TIME ASC

and I tried:

    SELECT C_ID, REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML, C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID
    FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
    WHERE C_CREATED_DATE_TIME > convert(varchar,'2022-08-31',121) AND C_CREATED_DATE_TIME > ?
    ORDER BY C_CREATED_DATE_TIME ASC

Neither worked. Any advice?
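A hedged guess, since no error text is shown: convert(varchar, '2022-08-31', 121) turns the literal into a string, so the comparison against a datetime column relies on implicit conversion. Converting the literal to datetime instead keeps the comparison in the column's own type:

    SELECT C_ID, REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML, C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID
    FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
    WHERE C_CREATED_DATE_TIME > ?
      AND C_CREATED_DATE_TIME > CONVERT(datetime, '2022-08-31', 121)
    ORDER BY C_CREATED_DATE_TIME ASC

If DB Connect still rejects it, it is also worth checking whether the input runs in rising-column mode, which can be strict about the shape of the WHERE clause around the ? placeholder.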
Hi, just curious if this is possible, as I have an interesting challenge. I have extracted fields, key=value: id0=0000, id1=1111, id2=2222, idN=NNNN, zone0=zone0, zone1=zone1, zone2=zone2, zoneN=zoneN. Now I want to create a new field like this, where the number auto-increments: | eval example0 = id0 + " location:" + zone0. My challenge is how to make this more "automatic", as I don't know the number N in the event, and I want to automate the new field so that every exampleN gets the same eval expression. It will be a little more complicated, as I'll add a case statement to the eval, but the initial challenge is how to automate the simpler, just-string scenario.
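A sketch using foreach, which templates an eval over every field matching a wildcard; <<MATCHSTR>> expands to the part matched by *, so id0 pairs with zone0, id1 with zone1, and so on:

    | foreach id* [ eval example<<MATCHSTR>> = '<<FIELD>>' . " location:" . 'zone<<MATCHSTR>>' ]

String concatenation in eval uses . rather than +, and the single quotes make sure the substituted tokens are treated as field names. The same template body can later hold your case() statement.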
Is there a way to track when an index stopped bringing in data? I just noticed that one of our indexes is no longer bringing data into Splunk. Is there a command where I can find the last known time? I have been able to track it manually back to the day it stopped receiving data.
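A quick sketch (the index name is a placeholder); tstats reads index-time metadata, so it finds the newest event timestamp without scanning raw events:

    | tstats latest(_time) as last_event where index=your_index
    | eval last_seen = strftime(last_event, "%Y-%m-%d %H:%M:%S")

Run it over All Time so it can look past the gap. The metadata command (| metadata type=sourcetypes index=your_index) gives similar lastTime/recentTime information broken out by sourcetype.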
We have the outlier SPL and visualizations working, but I don't know how to create the alerts themselves. How do we go about it? We can use sendemail, but it won't be captured within _audit, which is a shame.
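One approach, hedged since the shape of the outlier search isn't shown: save the search via Save As > Alert and make it return rows only when an outlier fires, then use the built-in trigger condition "Number of Results > 0". Assuming the search sets a flag field such as isOutlier (the field name is an assumption), the final pipe might look like:

    ... your outlier SPL ...
    | where isOutlier = 1

Alerts triggered through the scheduler are recorded in the Triggered Alerts view and in the scheduler logs in _internal, which gives an audit trail that a bare sendemail does not.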
After upgrading DB Connect from version 3.8 to 3.10, it won't accept any connection that was previously set. Everything worked fine before the upgrade, but now my outputs and inputs can't load. When I try choosing a connection table, it displays the error "invalid database connection". I also noticed the new DBX version has a Keystore tab on the settings menu (this is new and was not in the previous version 3.8). I have the necessary drivers installed: Splunk_JDBC_mssql version 1.1 and JRE version 11.0. Can someone assist me with what I'm missing for my connections to work?
Hi, I have a log that dynamically adds "fields" to the log record based on some logic. It's a syslog beginning plus a payload that looks like this (example): Sep 10 16:52:07 11.11.11.11 Sep 10 16:52:07 process[111]: app=test&key0=value0&key1=value1&key2=value2&...&keyN=valueN. How do I automatically/dynamically extract every keyN into a field?
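A sketch using the extract (kv) command, which splits the event on explicit pair and key/value delimiters regardless of how many keys there turn out to be:

    your_search
    | extract pairdelim="&" kvdelim="="

For a permanent, index-wide solution, the same delimiters can go into a transforms.conf stanza (DELIMS = "&", "=") referenced by a REPORT- line in props.conf for this sourcetype.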
While opening the search head server, I get this error: "View more information about your request (request ID = 631c96cc4c7fa17c4faf10) in Search. This page was linked to from https://inblrshsplnk07.siemens.net/. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage."
When configured to permissive mode, UI requests hitting the Splunk UI without the REMOTE_USER header are directed to a go-away page saying "not authorized". This behavior is correct for strict mode, but not for permissive mode. This is unfortunate for any use case where you want SSO to enable certain kinds of automatic access but still enable users to log in the old-fashioned way.

My use case is automated UI testing, which is obviously a minority, but this will affect all Splunk app developers.
I am not sure how to word this, so I'm going to present it as an example. We have 3 firewalls that send logs for ingestion. Each FW is for a separate purpose, so they are configured slightly differently. Each appliance has its logs ingested into Splunk into separate indexes (due to their purposes and locations in the logical topology). Within each firewall there are, of course, field values that are helpful to sort and run stats on. Now my question: I am still learning SPL (reading through Exploring Splunk by Carasso), so I don't have a full understanding of all the nuances. In one search string, can I reference each index, create a table for each index which further divides and displays that index into categories (firewall action as one field, type of request as another), provide stat counts on each of those categories (how many of field 1, field 2, etc.), and also show total bandwidth (bytes), all within the same table?

Index FW1 ------ stat count ------ FW Action ---- (nested sort) Type of Request ---- bytes total
Index FW2 ------ stat count ------ FW Action ---- (nested sort) Type of Request ---- bytes total
Index FW3 ------ stat count ------ FW Action ---- (nested sort) Type of Request ---- bytes total

Can I do all that in one search string, or do I have to create a search for each index?
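Yes, a single search can cover all three indexes; here is a sketch (the index and field names fw1/fw2/fw3, action, request_type, and bytes are assumptions; substitute your real ones):

    (index=fw1 OR index=fw2 OR index=fw3)
    | stats count, sum(bytes) as total_bytes by index, action, request_type
    | sort index, action, request_type

Each row then reads index / FW action / type of request with its event count and byte total, which is the nested grouping sketched above, delivered in one table.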
As the question says: can a Universal Forwarder report an internal IP? It can clearly report the external IP, but that's not useful to me.
I'm working with the "Jira Issue Input Add-on", and in Jira we have created custom fields. Splunk ingests issues, and the custom field data looks like this:

    customfield_10101: SA-1017
    customfield_10107: 3
    customfield_25402: [ [+] ]
    customfield_25426: [ [+] ]
    customfield_25427: { [+] }

There are 1,049 custom fields. I would like to use the names for the custom fields and have created a CSV file with this:

    customfield_custom_field_number,custom_field_name
    customfield_10000,Request participants
    ...
    customfield_27904,Target Date

I'm trying to avoid having all the renames in props.conf. Is there any way of taking the field name in an event and, using the lookup, renaming it to what is found in the lookup?
Can Splunk Enterprise 8.2.6 be upgraded to 9.1.0?
Hi, I have similar authentication logs as below:

LOG 1: 03362 auth: ST1-CMDR: User 'my-global\admin' logged in from IP1 to WEB_UI session

LOG 2: %%10WEB/4/WEBOPT_LOGIN_SUC(l): admin logged in from IP2

The regex below works only for event LOG 2:

    (?<user>\w+)\slogged\sin\sfrom\s(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})

Probably it doesn't match special characters; any idea how to solve that? Thank you in advance!
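The \w class is indeed the problem: it matches only letters, digits, and underscore, so the backslash in my-global\admin breaks the match. A sketch that handles both formats, assuming usernames never contain spaces or single quotes:

    | rex "(?<user>[^'\s]+)'?\s+logged\s+in\s+from\s+(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})"

[^'\s]+ accepts the domain\user form, and the optional '? consumes LOG 1's closing quote before " logged in from".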
I believe there is no report Splunk cannot produce, but I'm having trouble with this one. I'd like to generate a report that compares the last 30 days' average duration with the last 90 days' average duration and shows the increase/decrease. I have no trouble getting the 90-day average, but I can't figure out how to include the 30-day average in the same query. The data I'm working with is similar to this:

    date     Job  Duration
    9/1/2022 Job1 33
    9/1/2022 Job2 12
    9/1/2022 Job3 128
    9/2/2022 Job1 14
    9/2/2022 Job2 99
    9/2/2022 Job3 128
    9/3/2022 Job1 16
    9/3/2022 Job2 33
    9/3/2022 Job3 22
    9/4/2022 Job1 196
    9/4/2022 Job2 393
    9/4/2022 Job3 192

I'd like a report that looks like this:

    Job   All Days  Last 2 Days
    Job1  21        17
    Job2  44        35
    Job3  28        17

I can generate the All Days column, but am not sure how to get the last 2 days. Here's what I have:

    search=foo | bucket _time span=1d | stats sum(duration) as duration by _time, jobtype | stats avg(duration) as duration by jobtype

Any gurus out there that can help?
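A sketch using an eval expression inside stats, so both windows come out of a single pass over the data (shown with the 2-day window from the example; swap in "-30d" and a 90-day time range for the original goal; index=foo is a placeholder for your base search):

    index=foo earliest=-90d
    | stats avg(duration) as avg_all_days, avg(eval(if(_time >= relative_time(now(), "-2d"), duration, null()))) as avg_last_2_days by jobtype
    | eval pct_change = round((avg_last_2_days - avg_all_days) / avg_all_days * 100, 1)

The eval(if(...)) passes duration through only for events inside the recent window and null() otherwise, and avg ignores nulls. If you need the average of daily totals (as in your existing query), keep the bucket/stats sum step first and apply the same eval(if(...)) trick in the second stats.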