All Topics



Hello, we are currently working with two sets of data that have similar fields. We would like to align matching events on one row (payment amount, category/source, and account number) while also keeping the values that do not match, so we can see failed processing. Below are some screenshots of what the data looks like now in four rows, as well as what we're hoping to visualize in three rows. Any assistance would be greatly appreciated!

Below is our current search:

index="index1" Tag="Tag1"
| stats values(PaymentAmount) as PaymentAmount by PaymentChannel, AccountId, PaymentCategory, ResponseStatus, StartDT
| rename AccountId as AccountNumber, PaymentChannel as A_PaymentChannel, PaymentCategory as A_PaymentCategory, ResponseStatus as A_ResponseStatus, StartDT as A_Time
| append
    [ search index="index2" sourcetype="source2"
    | rename PaymentAmount as M_PayAmt
    | eval PayAmt = tonumber(round(M_PayAmt, 2))
    | rex field=source "M_(?<M_Source>\w+)_data.csv"
    | rename "TERMINAL ID" as M_KioskID, ResponseStatus as M_ResponseStatus, "KIOSK REPORT TIME" as M_Time
    | eval _time = strptime(M_Time, "%Y-%m-%d %H:%M:%S.%3Q")
    | addinfo
    | where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
    | stats values(PayAmt) as M_PayAmt latest(M_Time) as M_Time by AccountNumber, M_Source, M_ResponseStatus, M_KioskID
    | table M_PayAmt, AccountNumber, M_Source, M_KioskID, M_ResponseStatus, M_Time
    | mvexpand M_PayAmt ]
| eval A_PaymentTotal = "$" . PaymentAmount
| eval M_PayAmt = "$" . M_PayAmt
| eval joiner = AccountNumber
| table AccountNumber, A_PaymentChannel, M_KioskID, A_PaymentCategory, M_Source, A_PaymentTotal, M_PayAmt, A_ResponseStatus, M_ResponseStatus, A_Time, M_Time
| eval M_PayAmt = if(isnull(M_PayAmt), "Unknown", M_PayAmt)
| eval A_PaymentTotal = if(isnull(A_PaymentTotal), "Unknown", A_PaymentTotal)
| eval A_Time = if(isnull(A_Time), M_Time, A_Time)
| eval M_Time = if(isnull(M_Time), A_Time, M_Time)
| sort - M_Time

(Note: string concatenation in eval uses "." rather than "+", the table command referenced a nonexistent "_Time" field instead of M_Time, and sort takes "- M_Time" rather than "by M_Time desc".)
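For reference, one common way to collapse the appended rows onto a single row per account is to aggregate after the append rather than only renaming. This is a sketch, assuming AccountNumber is the join key and the field names above; values() merges the A_* row and the M_* row for the same account while leaving unmatched fields empty:

```spl
... | append [ ... ]
| stats values(A_PaymentChannel) as A_PaymentChannel
        values(A_PaymentCategory) as A_PaymentCategory
        values(A_PaymentTotal) as A_PaymentTotal
        values(A_ResponseStatus) as A_ResponseStatus
        values(A_Time) as A_Time
        values(M_KioskID) as M_KioskID
        values(M_Source) as M_Source
        values(M_PayAmt) as M_PayAmt
        values(M_ResponseStatus) as M_ResponseStatus
        values(M_Time) as M_Time
        by AccountNumber
```

Rows where only one side produced values keep their non-matching fields, which preserves the failed-processing cases as their own rows.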
I've created an alert for Account Expired. However, the triggered alert disappears when I do a Splunk restart. Is there any way to prevent this alert from disappearing? Any config setting?

In case you wanted to know, the alert information is:

- Settings:
    - Alert Type = Scheduled
    - Runs every day at 23:00
    - Expires 24 hours
- Trigger Conditions:
    - Trigger alert when Number of Results is greater than 0
    - Trigger Once
- Trigger Action:
    - Add to Triggered Alerts with Severity Critical
I have a dashboard with a multiselect that is populated dynamically using a search. When "All" is selected, I'm setting a different token with "*" (I'm using the hack found here to remove "All"). My multiselect is populated with choices that depend on other inputs. I want "All" to effectively mean all of the available choices, as opposed to "*", since "*" matches values outside the scope of the available choices. Is there any way I can do that? Can I set a different token to all the choices every time "All" is selected?
I know this is a commonly asked question due to its complexity, but I cannot figure out how to get emails to send via a Splunk alert. I created a simple search to find a specific string and created an alert with the following:

App: Search
Permissions: Private. Owned by admin.
Alert Type: Real-Time
Trigger Condition: Per-Result
Actions: Send email / Add to Triggered Alerts

I see the alert being triggered, but it never sends the email. I've tried sending it to two different email addresses: one to my work email, and another to my phone as a text (phoneNumber@mms.att.net), and neither of them works. The trigger appears in the list, though. I have tried multiple mail hosts in the configuration, but the current one is the default that appeared when I opened it: smtp-mail.outlook.com:587

Email security: I have tried all three options
No user/pass currently configured
Allowed Domains: mms.att.net
Send Emails As: SplunkAlert@test.edu

I've been sifting through the Splunk documentation for hours now and can't seem to get it right. Any ideas? Thanks
Hello, I'm trying to create a summary index. I scheduled a search and configured summary indexing on it, but I cannot run a new search over the results that the scheduled searches have already produced.
We log job status messages in Splunk. When a job runs successfully, a success message is logged. When a job errors out, an error message is logged. Both types of messages include hostname as a field. But when the underlying service fails to run a job, no message is logged.

I need to find hostnames that are missing success messages. If I could use dataset literals, I might search something like this:

| FROM <list of expected hostnames as dataset literal> NOT [subsearch for success message hostnames]

But Splunk Cloud Platform apparently does not support the use of dataset literals, so I've resorted to a more convoluted process using stats, as suggested by several Internet authors:

<search for success message hostnames>
| eval expected = split("<list of expected hostnames>", " ")
| stats values(hostname) as hostname by expected
| where NOT match(hostname, expected)

This approach works if some, but not all, expected hostnames are missing. However, in the case where all the expected hostnames are missing, the search comes back empty. I understand why it comes back empty. What I need is a "correct" way to find these missing hostnames that will work in all cases.
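For comparison, one common workaround for the all-missing case is to append the expected list as zero-count rows generated with makeresults, so the search produces results even when no success messages exist at all. This is a sketch; "host1 host2 host3" stands in for the real expected-hostname list:

```spl
<search for success message hostnames>
| stats count by hostname
| append
    [ | makeresults
      | eval hostname = split("host1 host2 host3", " ")
      | mvexpand hostname
      | eval count = 0 ]
| stats sum(count) as success_count by hostname
| where success_count = 0
```

Because the expected hostnames are injected as events rather than compared inside a where clause, a hostname with zero success messages still appears as a row with success_count = 0.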
Hi all, I was wondering if someone could help with a sort ordering issue I have. I am looking for a way to sort the instance names of my computers alphanumerically, so that the list sorts like:

a100pc1
a100pc2
a100pc3
a100pc10
a100pc20

instead of lexicographically like:

a100pc1
a100pc10
a100pc2
a100pc20
a100pc3
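One common approach to this kind of natural sort (a sketch, assuming the names always end in digits and the field is called host) is to split each name into its text prefix and numeric suffix, sort on both, then drop the helper fields:

```spl
... | rex field=host "(?<host_prefix>.*?)(?<host_num>\d+)$"
| eval host_num = tonumber(host_num)
| sort host_prefix host_num
| fields - host_prefix host_num
```

Converting host_num with tonumber is what makes sort compare 2 before 10; without it the suffixes sort as strings again.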
Hi, I'm developing an app that supplies a scripted input to Splunk. When it's run (on Linux machines), it reads the session key from stdin:

session_key = sys.stdin.readlines()[0]

This does not seem to work for Windows-based deployments. Does anyone have an idea of how to do this on Windows?
In Splunk ES we have correlation searches creating notable events. The timestamp of the notable event, and thus the timestamp of the incident in "Incident Review", is the time of when the correlation search ran. Is there any way to change this timestamp to a custom timestamp, i.e. the time of the actual log event in Splunk that triggered the notable event? I know one solution is to make the correlation search run really often, like every minute, which would make the timestamps quite precise, but not perfect, and also this would not be optimal with regards to performance. Also, I guess we could change the default time parsing of notable events in Splunk ES and add my own time field, e.g. "my_time_field", and use this field for time parsing instead, but then all out-of-the-box correlation searches in Splunk ES would stop working properly and it is in general not a good solution. We've made a temporary solution to this by adding a new "Incident Review Event Attribute" field called "Alert Time", which adds a new field to the incidents with the "real" timestamp, but it's not optimal, as the time of the incident itself is still the same. Is there any other way?  
Good afternoon, I am attempting to create a panel that shows me the unique URIs that have been accessed by a specific IP, with counts associated with each URI. I'm trying to get it to tell me something like this: 10.20.30.40 accessed www<.>google<.>com 40 times.

Here is my current query:

index=nsm | stats list(uri) by src_ip

This displays what I want, but with duplicates, and it provides no counts. I tried adding | dedup, which shows everything only once, but again no count.

index=nsm | chart count by src_ip, uri

This provides the information I'm looking for; however, the display is not ideal, and it doesn't show all URIs since chart caps the columns at OTHER.

Any information would be greatly appreciated.
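One pattern that keeps every URI with its own count on a single row per source IP (a sketch, assuming the field names above) is to count by both fields first, fold the count into the URI string, and then roll the results up into a multivalue column per IP:

```spl
index=nsm
| stats count by src_ip, uri
| eval uri_count = uri . " (" . count . ")"
| stats list(uri_count) as accessed_uris by src_ip
```

Alternatively, chart's OTHER cap can be lifted with `| chart count over src_ip by uri limit=0 useother=f`, at the cost of a very wide table.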
Hi, I'm trying to query tables from a Postgres database. All the tables there are listed under Foreign Tables, and nothing is under Tables. When I use DB Connect, it gets the schema name, but it can't get the list of tables. How can I access the foreign tables using DB Connect?
I have a lookup of all active credentials from Tenable called tio_credentials.csv. I have a search that lists the unique credentials used, like so:

`tenable` `io` earliest=-15d pluginID=19506
| rex field=plugin_output "'(?<domain>.*\\\)?(?P<Credentialed_Checks>.*)'"
| stats dc(host-ip) as count by Credentialed_Checks

How do I compare the list of credentials from Splunk events with the lookup in a way that shows all the credentials in the lookup that aren't showing up in events? I'm new to Splunk and trying to see if there are any credentials we can remove from our credentials list.
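One way to surface lookup entries with no matching events is to start from the lookup and subtract the credentials seen in events with a NOT subsearch. This is a sketch: the lookup column name "credential" is an assumption, and it must be renamed to match the Credentialed_Checks field extracted above (or vice versa):

```spl
| inputlookup tio_credentials.csv
| search NOT
    [ `tenable` `io` earliest=-15d pluginID=19506
    | rex field=plugin_output "'(?<domain>.*\\\)?(?P<Credentialed_Checks>.*)'"
    | stats count by Credentialed_Checks
    | rename Credentialed_Checks as credential
    | fields credential ]
```

Whatever rows survive are lookup credentials that produced no events in the last 15 days, i.e. candidates for removal.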
I have set up SC4S and connected it to Splunk Enterprise. I have also forwarded the logs from a FortiGate firewall as syslog over port 514 (I have verified via tcpdump that the FortiGate logs are arriving). From Splunk I can see only the SC4S startup events (source = sc4s, sourcetype = sc4s:events) being ingested; the FortiGate logs are not being ingested.

The current configuration is below. (I have installed the Fortinet app in Splunk, and it worked properly when I forwarded the FortiGate logs directly to Splunk.)

I created an HEC data input in Splunk (tested two, but neither worked):
1. index = default, sourcetype = default
2. index = netops, sourcetype = fgt_event

/opt/sc4s/env_file:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://192.168.3.46:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=4926fe93-4d91-409f-bf23-c6c67c0a880f
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no

splunk_metadata.csv:
fortinet_fortios_event,index,netops
fortinet_fortios_event,source,fgt_event

How can I fix this issue? I appreciate your support on this. Thank you.
Hello, in the "Splunk Security Advisory for Apache Log4j" I read that "Unless CVE-2021-45105 or CVE-2021-44832 increase in severity, Splunk will address these vulnerabilities as part of the next regular maintenance release of each affected product. Customers also have the option to remove Log4j Version 2 from Splunk Enterprise out of an abundance of caution."

CVE-2021-44832 concerns a vulnerability found in Log4j 2.17, so as far as I understand, the Log4j 2.17 vulnerability will be addressed in the next maintenance release. I am running Splunk Enterprise 8.1.7.2, and the version of Log4j in it is 2.16; this version of Log4j has been deleted. My management is asking when version 2.17 will be available, which I believe will be in the next maintenance release. Can you please tell me when the next maintenance release of Splunk Enterprise 8.1 will come out? Thanks
Hi experts, I would like to check whether anyone has tried using certificates with the Microsoft Defender add-on. How and where do I generate the certificates to upload to the Azure app registration? Currently I'm using this add-on from Splunkbase: https://splunkbase.splunk.com/app/4959/#/details. I would also like to check whether there is a version supported by Splunk.
Hi Splunkers! I want many guests to log in with a common guest account to view Splunk Enterprise (Dashboard Studio).

Q1: Is there a limit to the number of sessions that can be logged in at the same time with one account?
Q2: If there is a limit, what is the maximum?
Q3: Where is it set?

* You do not need to consider network limitations such as a load balancer; I just want to know the limit on the Splunk side.
Hi, I want to see the results for each service on one line, but I see each hour on a different line, as in the screenshot below. Can you please let me know what changes need to be made to get the results on one line, even when multiple hours fall within the selected time range?

My search query:

index=*****
| stats list(service_calls) as service_calls list(service_errors) as service_errors list(service_error_rate) as service_error_rate by service

Thanks, SG
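One way to collapse the per-hour rows into a single row per service (a sketch, assuming service_calls and service_errors are numeric per-hour counts) is to aggregate across the whole time range instead of listing the hourly values; recomputing service_error_rate from the summed counts is an assumption about how that field is defined:

```spl
index=*****
| stats sum(service_calls) as service_calls sum(service_errors) as service_errors by service
| eval service_error_rate = round(service_errors / service_calls * 100, 2)
```

sum() produces exactly one row per service regardless of how many hours the time picker covers, whereas list() keeps one value per underlying row.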
Can someone please shed some light on how to move a licence server between sites? The scenario is a new deployment that needs to be able to fail over to a new DC from the original location. Would additional certs and licences be needed? Thanks in advance.
Hi everyone, I have created the query below in Splunk to fetch error messages:

index=abc ns=blazegateway-c2 CASE(ERROR)
| rex field=_raw "(?<!LogLevel=)ERROR(?<Error_Message>.*)"
| eval _time = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| cluster showcount=t t=0.3
| table app_name, Error_Message, cluster_count, _time, environment, pod_name, ns
| dedup Error_Message
| rename app_name as APP_NAME, _time as Time, environment as Environment, pod_name as Pod_Name, cluster_count as Count

I observe that for a particular error message like the one below:

[reactor-http-epoll-4,cd5411f55ef5b309d8c4bc3f558e8af2,269476b43c74118e,01] reactor.core.publisher.Operators - Operator called default onErrorDropped

the Count comes out as 42, although there are only 13 events with this error message. I want to know whether this is a problem with cluster_count, and how the cluster command works in Splunk. Is my query reporting cluster_count instead of the actual event count? Can someone guide me on this?
Hello, I use a search with the structure below in order to timechart events from two different searches. As you can see, I need to take perc90 of the events before doing a timechart. My question concerns the timechart: is there a way to timechart the events without using an avg function?

index=toto
| search abc <= 1000
| stats perc90(abc) as "titi" by _time
| append
    [ search index=toto
    | search abc >= 1000
    | stats perc90(abc) as "tutu" by _time ]
| timechart span=1m avg("titi") as "titi", avg("tutu") as "tutu"

Thanks
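One way to avoid the avg (and the append) entirely, sketched under the assumption that the two searches differ only in the abc threshold, is to split the field with eval in a single search and let timechart apply perc90 itself:

```spl
index=toto
| eval titi = if(abc <= 1000, abc, null()),
       tutu = if(abc >= 1000, abc, null())
| timechart span=1m perc90(titi) as titi perc90(tutu) as tutu
```

Because timechart computes the percentile directly over the raw values in each 1-minute bucket, no second aggregation function is needed on top of the perc90 results.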