All Topics


After upgrading DB Connect from version 3.8 to 3.10, it won't accept any connection that was previously set up. Everything worked fine before the upgrade, but now my outputs and inputs can't load. When I try choosing a connection table, it displays the error "invalid database connection".

I also noticed the new DBX version has a Keystore tab on the settings menu (this is new and was not in version 3.8). I have the necessary drivers installed: Splunk_JDBC_mssql version 1.1 and JRE version 11.0. Can someone tell me what I'm missing for my connections to work?
Hi, I have a log that dynamically adds "fields" to the log record based on some logic. It's a syslog beginning plus a payload that looks like this (example):

Sep 10 16:52:07 11.11.11.11 Sep 10 16:52:07 process[111]: app=test&key0=value0&key1=value1&key2=key...&keyN=valueN

How can I automatically/dynamically extract all keyN into fields?
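A minimal search-time sketch: since the payload is a chain of key=value pairs separated by ampersands, the extract (kv) command with custom delimiters may pull them all out dynamically (the index and sourcetype names here are placeholders):

index=your_index sourcetype=your_syslog
| extract pairdelim="&" kvdelim="="

The same delimiters could also be configured once in transforms.conf with DELIMS and wired up via a REPORT- line in props.conf, so the fields appear automatically at search time.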
While opening the search head in a browser, I get this error:

View more information about your request (request ID = 631c96cc4c7fa17c4faf10) in Search. This page was linked to from https://inblrshsplnk07.siemens.net/. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage.
When configured to permissive mode, UI requests hitting the Splunk UI without the REMOTE_USER header are directed to a go-away page saying "not authorized". This behavior is correct for strict mode, but not for permissive mode. This is kinda unfortunate for any use case where you want SSO to enable certain kinds of automatic access but still let users log in the old-fashioned way.

My use case is automated UI testing, which is obviously a minority, but this will affect all Splunk app developers.
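For reference, this is roughly what a permissive SSO setup looks like in web.conf; the trustedIP value below is a placeholder for your proxy's address:

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
SSOMode = permissive
# address of the authenticating proxy (placeholder value)
trustedIP = 10.0.0.1
# header carrying the authenticated username
remoteUser = REMOTE_USER

In permissive mode, requests from the trusted IP that lack the header would be expected to fall back to the normal login page rather than the unauthorized page.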
I am not sure how to word this, so I'm going to present it as an example.

We have 3 firewalls that send logs for ingestion. Each FW serves a separate purpose, so they are configured slightly differently. Each appliance has its logs ingested into a separate Splunk index (due to their purposes and location in the logical topology). Within each firewall's logs there are, of course, field values that are helpful to sort and run stats on.

Now my question: I am still learning SPL (reading through Exploring Splunk by Carasso), so I don't have a full understanding of all the nuances. In one search string, can I reference each index and create a table for each index that breaks it down into categories, with firewall action as one field and type of request as another, then provides stat counts on each of those categories (how many of field 1, field 2, etc.) and also shows total bandwidth (bytes), all within the same table? Roughly:

Index FW1
    stat count ------ FW Action ---- (nested sort) Type of Request ---- bytes total
Index FW2
    stat count ------ FW Action ---- (nested sort) Type of Request ---- bytes total
Index FW3
    stat count ------ FW Action ---- (nested sort) Type of Request ---- bytes total

Can I do all that in one search string, or do I have to create a search for each index?
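A minimal sketch of the single-search approach, assuming hypothetical index names fw1/fw2/fw3 and field names action, request_type, and bytes (substitute your own):

index=fw1 OR index=fw2 OR index=fw3
| stats count AS event_count, sum(bytes) AS total_bytes BY index, action, request_type
| sort index, action, request_type

Grouping BY index keeps the three firewalls separated within one result table, which a dashboard could then split into one panel per index if desired.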
As the question says: can a Universal Forwarder report an internal IP? It can clearly report the external IP, but that's not useful to me.
I'm working with the "Jira Issue Input Add-on" and in Jira we have created custom fields.  Splunk ingests issues and the custom field data looks like this   customfield_10101: SA-1017 customfield... See more...
I'm working with the "Jira Issue Input Add-on" and in Jira we have created custom fields.  Splunk ingests issues and the custom field data looks like this   customfield_10101: SA-1017 customfield_10107: 3 customfield_25402: [ [+] ] customfield_25426: [ [+] ] customfield_25427: { [+] }   There are 1,049 custom fields.  I would like to use the names for the custom fields and have created a csv file with this   customfield_custom_field_number,custom_field_name customfield_10000,Request participants ... customfield_27904,Target Date   I'm trying to avoid having all the renames in props.conf.  Is there any way of taking the field name in an event and using the lookup renaming it to what is found in the lookup?
Can Splunk Enterprise 8.2.6 be upgraded to 9.1.0?
Hi, I have authentication logs similar to the ones below:

LOG 1: 03362 auth: ST1-CMDR: User 'my-global\admin' logged in from IP1 to WEB_UI session

LOG 2: %%10WEB/4/WEBOPT_LOGIN_SUC(l): admin logged in from IP2

The regex below works only for LOG 2:

(?<user>\w+)\slogged\sin\sfrom\s(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})

It probably doesn't match the special characters in 'my-global\admin'. Any idea how to solve that? Thank you in advance!
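A sketch of a more permissive pattern that should cover both formats, assuming src_ip is always a dotted-quad address (IP1/IP2 above are placeholders). The character class allows backslashes and hyphens in the username, and the optional quote handles the quoted form:

| rex "(?<user>[^\s']+)'?\s+logged\s+in\s+from\s+(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})"

For LOG 1 this captures my-global\admin (the opening quote is excluded by the character class); for LOG 2 it captures admin.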
I believe there is no report Splunk cannot produce, but I'm having trouble with this one. I'd like to generate a report that compares the last 30 days' average duration with the last 90 days' average duration and shows the increase/decrease. I am having no trouble getting the last 90 day average, but I can't figure out how to include the last 30 day average in the same query.

The data I'm working with is similar to this:

date      Job   Duration
9/1/2022  Job1  33
9/1/2022  Job2  12
9/1/2022  Job3  128
9/2/2022  Job1  14
9/2/2022  Job2  99
9/2/2022  Job3  128
9/3/2022  Job1  16
9/3/2022  Job2  33
9/3/2022  Job3  22
9/4/2022  Job1  196
9/4/2022  Job2  393
9/4/2022  Job3  192

I'd like a report that looks like this (a scaled-down "all days vs. last 2 days" version of the same idea):

Job   All Days  Last 2 Days
Job1  21        17
Job2  44        35
Job3  28        17

I can generate the All Days column, but am not sure how to get the last 2 days. Here's what I have:

search=foo
| bucket _time span=1d
| stats sum(duration) AS duration BY _time, jobtype
| stats avg(duration) AS duration BY jobtype

Any gurus out there that can help?
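A minimal sketch of one way to get both windows in a single query, using a conditional eval so the shorter window is averaged alongside the full one (earliest=-90d@d and the field names follow the query above):

search=foo earliest=-90d@d
| bucket _time span=1d
| stats sum(duration) AS duration BY _time, jobtype
| eval recent=if(_time >= relative_time(now(), "-30d@d"), duration, null())
| stats avg(duration) AS avg_90d, avg(recent) AS avg_30d BY jobtype
| eval pct_change=round((avg_30d - avg_90d) / avg_90d * 100, 1)

avg() ignores nulls, so avg(recent) only sees the last 30 days while avg(duration) covers all 90.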
I have the following sample XML event where I want to extract a specific value for a child tag. For example, when the <OrderFacts> value is 'Place', extract the <OrderUpdateValue> that follows it, which is 'False'.

<OrderUpdateField>
     <OrderFacts>Place</OrderFacts>
     <OrderDataType>String</OrderDataType>
     <OrderUpdateValue>False</OrderUpdateValue>
     <OrderFacts>Number</OrderFacts>
     <OrderDataType>Integer</OrderDataType>
     <OrderUpdateValue>1</OrderUpdateValue>
     <OrderFacts>Location</OrderFacts>
     <OrderDataType>String</OrderDataType>
     <OrderUpdateValue>Earth</OrderUpdateValue>
</OrderUpdateField>
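Since the value of interest is always the first <OrderUpdateValue> after the matching <OrderFacts>, a non-greedy rex is one sketch of an approach (the field name order_update_value is arbitrary):

| rex "(?s)<OrderFacts>Place</OrderFacts>.*?<OrderUpdateValue>(?<order_update_value>[^<]+)</OrderUpdateValue>"

The (?s) flag lets .*? cross line breaks, and the lazy quantifier stops at the first <OrderUpdateValue> following the 'Place' fact.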
Hi folks, we've been using Phantom for a while now and are currently implementing SAML integration. The concerning part is that objects (assets, playbooks, permissions...) are tied to user ids instead of usernames, so logins via SAML generate new user ids, and we have to remap those objects from the particular local user to the current SAML user id.

Is there any way to do that via REST, or did anyone ever build a playbook to make that change? My idea is to rename the local users, appending "_local" to the username, and ask the users to log in via SSO, then have a routine that identifies SAML username = local username + "_local" and moves the objects from the local id to the new SAML id. Doable?
Hi folks, how can I display the results for 2022-09-02 in the Result_Prev column and 2022-09-09 in the Result column, keeping the other columns the same? I'm calculating the percent difference between this week's and the previous week's results. Any help is appreciated. Thank you!
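Without seeing the underlying search, here is one heavily hedged sketch: label each event by week, pivot the two weeks into columns named Result and Result_Prev, and compute the difference (your_index, metric, and value are hypothetical names):

index=your_index earliest=-2w@w latest=@w
| eval period=if(_time >= relative_time(now(), "-1w@w"), "Result", "Result_Prev")
| chart latest(value) BY metric, period
| eval percent_diff=round((Result - Result_Prev) / Result_Prev * 100, 2)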
I cannot seem to get more than myself and Unassigned as users in Alert Manager. If I switch from "Builtin" to "alert_manager" it still says both. The install info for Alert Manager mentions typing in alert_manager users, but I have nowhere to type these. I did see an older thread from 2018 where the person mentioned inheritance and roles, but I couldn't make sense of it. Any help is appreciated, thank you!
Can someone explain the prestats option within tstats? I have reread the docs a bunch of times but just don't find a clear explanation of what it does, other than that it is "designed to be consumed by commands that generate aggregate calculations". Below I have 2 very basic queries which are returning vastly different results.

index="myindex" sourcetype="mysource" DRA=B*
| rex field=DRA "^(?<pfx>\D\S{1}).*"
| timechart count by pfx useother=f limit=0 usenull=f

AND

| tstats prestats=t count where index="myindex" sourcetype="mysourcetype" DRA=B* by DRA, _time
| rex field=DRA "^(?<pfx>\D\S{1}).*"
| timechart count by pfx useother=f limit=0 usenull=f

I get it: I must use prestats for the search using tstats. But then what are the values that it IS returning without prestats? What is prestats doing?
I am struggling to find a solution where I would like a dashboard tile to turn red if an email is not received by 11:30 every weekday morning. The search behind the tile has the timeframe set to Today, so I would like it to be green by default and to stay green if Splunk ingests the email before 11:30 am. It should only turn red if the email hasn't been ingested by 11:30 am.

This is the base search query:

index=msexchange sourcetype="MSExchange:2013:MessageTracking" subject="**** - Success" directionality=Originating psender="********"
| stats dc(message_subject)

Can you help?
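One sketch of the status logic, building on the base search above: the case() stays green before the 11:30 deadline and only goes red afterwards if nothing has arrived. A dashboard range/color mapping on the status field would then drive the tile colour:

index=msexchange sourcetype="MSExchange:2013:MessageTracking" subject="**** - Success" directionality=Originating psender="********" earliest=@d
| stats dc(message_subject) AS received
| eval status=case(received > 0, "green",
    now() < relative_time(now(), "@d+11h+30m"), "green",
    true(), "red")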
Hello, I am currently receiving firewall data on my heavy forwarder on a specific port number. On the HF there is a simple inputs.conf with:

[udp://:portnumber]
sourcetype = fgt_log
index = fw_data

and an outputs.conf that sends everything to the indexers. The problem is that I am receiving a lot of garbage traffic (like DNS traffic to 8.8.8.8 or 8.8.4.4). I don't want to index this data, and I don't have access to the firewall, so I can't just stop it there.

I thought that a blacklist would stop the events from coming in, so I tried a simple blacklist like the ones used under the [monitor] stanza, something like this:

blacklist = (dstip=8\.8\.8\.8|dstip=8\.8\.4\.4|service="DNS")

Unfortunately it didn't work. I did some research but only found "acceptFrom", which I don't think is useful in this situation. I also came across this post, but it wasn't useful either: https://community.splunk.com/t5/Getting-Data-In/Blacklist-a-host-hosts-is-sending-logs-to-Splunk-via-TCP/m-p/289283

Any tips?
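Since blacklist only applies to file monitor inputs, the usual approach for network inputs is to route unwanted events to the nullQueue on the HF at parse time. A sketch against the fgt_log sourcetype, reusing the regex from the attempted blacklist:

# props.conf
[fgt_log]
TRANSFORMS-drop_garbage = drop_dns_traffic

# transforms.conf
[drop_dns_traffic]
REGEX = dstip=8\.8\.8\.8|dstip=8\.8\.4\.4|service="DNS"
DEST_KEY = queue
FORMAT = nullQueue

Events matching the REGEX are discarded before indexing; everything else flows through unchanged.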
I have an Oracle API Gateway that I need to monitor to capture some errors. It is a very old version, actually deprecated by Oracle. There are two nodes that have the JVM agents, but the information gathered is very poor. Following my provider's instructions, I could use a log extension. Does anyone have experience with this kind of platform? Which of the log extensions could I use for this? I'm considering this one: https://developer.cisco.com/codeexchange/github/repo/Appdynamics/log-monitoring-extension/
Hello all, I've been trying for 3 days now to find a solution for my problem, but without success. I looked around for solutions and already-asked questions but didn't find any (or I missed it).

I have a monitoring tool that sends host problems to Splunk in JSON format. Once the alert/problem has ended, the tool sends another message to Splunk, but with very little information except for one field that contains the id (alert id) of the initial alert. Here is an example.

Initial alert:

{"timestamp":1662715948,"guid":468431423,"result":1,"eid":1580,"name":"test kouzou","sev":3,"h":[{"host":"toto","name":"toto"}],"team":["titi"],"tags":[{"tag":"App","value":"System"},{"tag":"App","value":"host-up"}]}

Recovery:

{"timestamp":1662716608,"guid":604699994,"result":0,"eid":1059134005,"init_eid":1580}

The idea is to search for alerts that have not been resolved yet and show them in a dashboard; if the alert has been acknowledged by the tool, I don't want to show it. I'm trying to match the "init_eid" and "eid" fields, but without success. It is JSON content, and I tried some manual extractions to guarantee the "normality" of the fields. I also realized that the transaction command is not working at all, only for one field. Do you have any ideas on how I could achieve this goal? Thank you in advance.
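A minimal sketch of a stats-based correlation, which is usually more reliable than transaction here: normalize both event types to a common id, then keep only the ids that never received a recovery (result=0). The index name is a placeholder:

index=your_monitoring_index
| eval alert_id=coalesce(init_eid, eid)
| stats min(_time) AS first_seen, latest(name) AS name, latest(sev) AS sev,
    count(eval(result=0)) AS recoveries BY alert_id
| where recoveries=0

coalesce(init_eid, eid) maps the recovery's init_eid and the initial alert's eid onto the same value, so both events land in the same group.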
Hi, does the Splunk App for Amazon Connect have the ability to report/feed into Splunk when people listen to or download recorded calls in AWS Connect? Thanks, Mark