Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

How to create an alert to detect when there is a VPN connection from the outside
Hello all, I need some help please. I would like to query for the last upddate. However, the fields belegtyp and pdid can also change. I need the last upddate for all of them (the last upddate when the belegtyp for a pdid changes). This is my query:

| eval crdate=strptime(crdate,"%Y-%m-%d")
| eval crdate=strftime(crdate,"%Y-%m-%d")
| eval upddate=strptime(upddate,"%Y-%m-%d")
| eval upddate=strftime(upddate,"%Y-%m-%d")
| search belegnummer=177287
| stats last(upddate) by upddate crdate belegnummer belegtyp pdid

It hasn't worked so far with

| sort -upddate
| stats last(upddate) by ...
| stats first(upddate) by ...

I don't know why it doesn't work. Hope to get some help on this, thanks in advance.
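A minimal sketch of one possible approach, assuming upddate is stored as %Y-%m-%d text and what is wanted is the most recent upddate per belegnummer, pdid and belegtyp. Comparing on an epoch value avoids depending on event order, which is what last() relies on (index=... is a placeholder for the original base search):

index=... belegnummer=177287
| eval upd_epoch=strptime(upddate,"%Y-%m-%d")
| stats max(upd_epoch) as upd_epoch by belegnummer pdid belegtyp
| eval last_upddate=strftime(upd_epoch,"%Y-%m-%d")
| fields belegnummer pdid belegtyp last_upddate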
I have this 'Email' data model in ES. The model is populated by a macro and tags (2 eventtypes populated by saved searches): (`cim_Email_indexes`) tag=IS_Email. The two eventtypes have the IS_Email tag associated with them. Now, a new source needs to be fed into the data model. The fields of the new source are CIM compatible but are not fed into the data model. I checked the corresponding eventtype and there were some tags associated with it, but the IS_Email tag wasn't there. So, to add the data from this new eventtype into the data model, is it sufficient to just add the IS_Email tag to it (the eventtype), or is anything else required? If that is sufficient, do I need to rebuild the Email data model after adding the tag?
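For reference, a minimal sketch of the tags.conf stanza that would attach the tag to the new eventtype (the eventtype name new_email_source is a placeholder, not the real one):

[eventtype=new_email_source]
IS_Email = enabled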
Hello everyone, thanks for reading; my English is not good at all. I have this:

A B C D E F G
1 1 0 4 1 0 0
1 2 0 2 2 0 9
0 0 0 1 3 0 8
0 1 0 0 4 0 9
0 0 0 9 5 0 0
0 0 0 8 0 0 0
0 0 0 0 0 1 0
0 0 0 0 0 0 0

So, I need to sum all the values of each column. I would like to have this:

Column sum
A 2
B 4
C 0
D 24
E 15
F 1
G 26
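A minimal sketch of one way to do this in SPL, assuming A through G are already extracted as numeric fields on each event (the leading ... stands for your base search): sum every field with a wildcard, then transpose so each column becomes a row:

... | stats sum(*) as *
| transpose column_name=Column
| rename "row 1" as sum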
Query 1
index=ops_gtosplus trans_id="PREGATE_DOCU" application_m="GTOSPLUS_OPS_GATEGW_BW" msg_x="MSG PROCESSING | END OK"

Query 2
index=ops_gtosplus trans_id="PREGATE_DOCU" application_m="GTOSPLUS_OPS_GOS_SB" msg_x="MSG PROCESSING | END OK"

Both queries contain an event_id field. I want to know how to find records whose event_id appears in query 1 but not in query 2, and I need to allow a 15-second grace period. For example, if an event_id appears in query 1 at 2:00:00 PM and by 2:00:15 PM that event_id still has not appeared in query 2, an alert needs to be sent.
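A minimal sketch of one possible approach (a sketch, not a tested alert): search both application_m values in one query, group by event_id, and keep only the IDs that have been seen for the first application alone and are already older than 15 seconds:

index=ops_gtosplus trans_id="PREGATE_DOCU" msg_x="MSG PROCESSING | END OK" (application_m="GTOSPLUS_OPS_GATEGW_BW" OR application_m="GTOSPLUS_OPS_GOS_SB")
| stats earliest(_time) as first_seen values(application_m) as apps by event_id
| where mvcount(apps)=1 AND mvindex(apps,0)="GTOSPLUS_OPS_GATEGW_BW"
| where first_seen <= relative_time(now(), "-15s")

Run on a short schedule (for example every minute over a short window), any row returned can trigger the alert.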
I have released an app for Splunk Enterprise. As Splunk Enterprise is an on-premises product that runs on customers' local hosts, I use file logging to collect debug logs, following https://dev.splunk.com/enterprise/docs/developapps/addsupport/logging/loggingsplunkextensions. I can read the local file and see the indexed logs in search results. Now I need to migrate the app to Splunk Cloud. How can I collect the debug logs for apps on Cloud? Does the approach in that link still work on Cloud? If not, are there other guides for this?
My dilemma:

index=prod_s3 sourcetype=My_Sourcetype earliest=-30m (host=2016) OR (host=2018) OR (host=2015) OR (host=2017)
| stats count as value by host

The above query will return a count for each host that is ingesting. However, if one of the above hosts is not ingesting, I wish to alert on that host, displaying the host name as output with a message. Any help is appreciated.
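A minimal sketch of one common pattern, assuming the list of expected hosts is fixed: append a zero-count row for every expected host, then keep only the hosts whose real count never rises above zero:

index=prod_s3 sourcetype=My_Sourcetype earliest=-30m (host=2016) OR (host=2018) OR (host=2015) OR (host=2017)
| stats count as value by host
| append [| makeresults | eval host=split("2015,2016,2017,2018",",") | mvexpand host | eval value=0 | fields host value]
| stats max(value) as value by host
| where value=0
| eval message="No events ingested from host ".host." in the last 30 minutes"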
Hello, Splunkers! A few weeks ago, I posted a question about the errors on AIX and Solaris servers when I install the universal forwarder, but I couldn't get an answer. Then I found a similar question and used it to try to solve my problem.

The URL below is my question: https://community.splunk.com/t5/Splunk-Enterprise/AIX-and-Solaris-server-error-after-installed-the-universal/m-p/583673#M11431

The URL below is the solution that I used: https://community.splunk.com/t5/Installation/Splunk-Universal-Forwarder-7-3-6-SunOS-sparc-Won-t-Install/m-p/506542

After all of this, I am still getting error messages from Solaris and AIX. The error messages are in the attached screenshots: 1. Solaris error, 2. AIX error. How should I fix these problems? Thank you in advance.
Hi, I'm looking to match my list of Qualys events against the list of CVEs found in the KEV lookup from cisa.gov. I'm not having any success with the search below; can you provide any guidance? For example:

index=qualys *[|inputlookup cisa_cve.csv | fields cveID]*

I need to find the events in my base search whose CVE value matches a cveID returned by the KEV lookup. Any help is greatly appreciated. Thanks.
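A minimal sketch of one possible approach, assuming the Qualys events carry an extracted CVE field (that field name is an assumption; adjust it to whatever your events actually use). Renaming the lookup column inside the subsearch makes the generated filter target the event field instead of doing a raw-text wildcard match:

index=qualys
    [| inputlookup cisa_cve.csv
     | fields cveID
     | rename cveID as CVE]
| stats count by CVE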
Hi, I am trying to install an app (any app) on the Splunk Cloud trial and it asks me for my splunk.com username and password. I provide them, and I am sure they are the right ones because they are the ones I used to write this message, but it keeps telling me they are invalid. I created the account 30 minutes before and I seem to be able to use it, except for downloading apps, which is what I need. Any thoughts on this? I also tried to download the app from the store to upload it myself afterwards, but there is no "upload app from file" feature. Thank you.
I am new to Splunk. I get the attached message when I click a link for this search:

(|loadjob scheduler__hgt2_c3BsdW5rX2ludGVybmFsX21ldHJpY3M__RMD5c1adf444890fb9a1_at_1645171200_579 | head 1 | tail 1)
index=*** sourcetype=***:channel:threats* tag=malware threatInfo.analystVerdict=undefined threatInfo.incidentStatus=unresolved threatInfo.mitigationStatus=mitigated
| table _time action dest user signature file_name version description

Saved Search [Detections Handled by SentinelOne]: number of events (1)

Can anyone explain how to resolve this?
How can I modify the alerting in Splunk Website Performance Monitoring to ONLY alert on sites that are actually down, NOT URLs where the response code = 200 (OK)? Please see the alert and alert condition in the screenshots below. Thank you!
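A minimal sketch of an alert search that only returns URLs whose latest check did not come back with a 200. The index, sourcetype, and the response_code/url field names here are assumptions; adjust them to whatever your monitoring input actually writes:

index=web_ping sourcetype=web_ping
| stats latest(response_code) as response_code by url
| where response_code != 200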
Hello, please help me with the rex commands for extracting the fields below from the JSON data:

"resourceName" : "abcd", "hostname" : "ipvalue", "environment" : "development"
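A minimal sketch, assuming the raw event is valid JSON (the leading ... stands for your base search). spath is usually the simpler choice for JSON, but rex equivalents are shown as well:

... | spath path=resourceName | spath path=hostname | spath path=environment

or, with rex:

... | rex "\"resourceName\"\s*:\s*\"(?<resourceName>[^\"]+)\""
    | rex "\"hostname\"\s*:\s*\"(?<hostname>[^\"]+)\""
    | rex "\"environment\"\s*:\s*\"(?<environment>[^\"]+)\""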
Hello Splunkers! How would one view the parameters of indexes.conf by using an SPL statement? The SPL statement below doesn't seem to work. Any help is greatly appreciated!

| rest splunk_server=<hostname>/services/configs/conf-indexes | transpose
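For comparison, a minimal sketch of the form the rest command normally takes: the endpoint path is the first argument and splunk_server is a separate option (your role also needs permission to call that REST endpoint):

| rest /services/configs/conf-indexes splunk_server=<hostname>
| transpose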
Hi, we have 3 search heads in a SHC, and I am planning to deploy "Splunk_SA_CIM" to my SHC from the deployer.

Question 1 - Once "Splunk_SA_CIM" is deployed to the SHC members, and I then, for example, edit the "cim_Network_Traffic_indexes" macro from the search head GUI (the search heads are behind a LB), add the firewall index to it, and then accelerate the "Network Traffic" DM from the GUI: will this accelerate the DM on all 3 search head members, and will the macro be updated on all 3 SH members too?

Question 2 - Or should I make the above changes in the "Splunk_SA_CIM" app under the "local" folder in macros.conf and datamodels.conf on the deployer and push them to the SHC (see the sketch after this list)?

Question 3 - What is the correct way to manage/update data model config in the "Splunk_SA_CIM" app (adding indexes, enabling acceleration, adding/removing fields) in a search head cluster which will also have the Enterprise Security app installed in the near future?
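A minimal sketch of what the deployer-side override from Question 2 might look like; the index name firewall and the acceleration window are placeholders for illustration only:

# $SPLUNK_HOME/etc/shcluster/apps/Splunk_SA_CIM/local/macros.conf
[cim_Network_Traffic_indexes]
definition = (index=firewall)

# $SPLUNK_HOME/etc/shcluster/apps/Splunk_SA_CIM/local/datamodels.conf
[Network_Traffic]
acceleration = 1
acceleration.earliest_time = -7d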
After upgrading to 8.2.4, the Splunk Enterprise cluster is now reporting this error:

Unable to initialize modular input "relaymodaction" defined in the app "Splunk_SA_CIM": Introspecting scheme=relaymodaction: script running failed (PID 27150 exited with code 1)
We are working on automating the installation and configuration of Splunk DB Connect. For the purposes of this question we are using DB Connect version 3.6.0.

My question is: how does the identity.dat file get generated? We know it gets generated on a fresh DB Connect install the first time an identity is created manually. Our issue is that the DB Connect API endpoint for creating identities returns a 200 OK when creating an identity for the first time, but the identity does not get created and no identity.dat file is generated. If, after a fresh install of DB Connect, we manually add an identity through the UI, the identity.dat file is successfully generated. We are then able to hit the endpoint to create identities and it creates them correctly.

The endpoint that we are hitting is:

/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/identities

The payload that we are uploading to the endpoint is built as follows:

def output(self):
    # Assemble the identity definition posted to the dbxproxy identities endpoint
    data = {}
    data["name"] = self.db_identity_name
    data["username"] = self.db_username
    data["password"] = self.db_password
    data["disabled"] = self.disabled
    data["domain_name"] = self.domain_name
    data["use_win_auth"] = self.use_win_auth
    return data
Hi, when using the Lookup Editor app (https://splunkbase.splunk.com/app/1724/), it allows the user to save fields with leading and trailing spaces. Is there a plan to update the app to trim them, and/or alternatively is there a quick fix, apart from user training? Thanks, laks. @Anonymous - any thoughts please? Thanks.
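A minimal sketch of a one-off cleanup that trims every field of an existing lookup and writes it back (the lookup file name my_lookup.csv is a placeholder):

| inputlookup my_lookup.csv
| foreach * [ eval <<FIELD>> = trim('<<FIELD>>') ]
| outputlookup my_lookup.csv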
Hi, I have an event:

----------------------- DISK INFORMATION ----------------------------
DISK="/dev/sda" NAME="sda" HCTL="0:0:0:0" TYPE="disk" VENDOR="VMware " SIZE="50G" SCSIHOST="0" CHANNEL="0" ID="0" LUN="0" BOOTDISK="TRUE"
DISK="/dev/sdb" NAME="sdb" HCTL="0:0:1:0" TYPE="disk" VENDOR="VMware " SIZE="500G" SCSIHOST="0" CHANNEL="0" ID="1" LUN="0" BOOTDISK="FALSE"

I have multiple DISK, NAME, etc. values in a single event. I tried the query below:

from index | Firmware_Version="----------------------- DISK INFORMATION --------------------------*" host="abc"
| extract pairdelim="{=}" kvdelim=" "
| table host DISK NAME TYPE

but I am getting only /dev/sda; I need /dev/sdb as well. Thanks in advance.
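A minimal sketch of one way to capture every occurrence rather than just the first, using rex with max_match=0 so DISK, NAME and TYPE become multivalue fields (index=... is a placeholder for your base search):

index=... host="abc" "DISK INFORMATION"
| rex max_match=0 "DISK=\"(?<DISK>[^\"]+)\""
| rex max_match=0 "NAME=\"(?<NAME>[^\"]+)\""
| rex max_match=0 "TYPE=\"(?<TYPE>[^\"]+)\""
| table host DISK NAME TYPE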
Hi, the <selection> element in the code below is not working correctly and I can't figure out why. I want to capture the time of the bar I click on in the graph; however, it always gives me the start time of the graph, not the time of the bar.

<panel depends="$host_token$">
  <chart>
    <title>Sig Events Error Count by MX Component</title>
    <search>
      <query>| mstats max("mx.process.errors") prestats=true WHERE "index"="metrics_test" AND mx.env=$host_token$ AND log.type=sig-event span=60s BY "log.type" pid replica.name service.name
| search "psrsvd_nx_mx.process.errors" &gt; 0
| rename "service.name" as service_name
| rename "replica.name" as replica_name
| eval Process_Name=((service_name . " # ") . replica_name)
| timechart max("mx.process.errors") AS Error_Log_Nb by Process_Name limit=10000
| eval Error_Log_Nb=substr(Error_Log_Nb, 1, len(Error_Log_Nb)-7)</query>
      <earliest>$time_token.earliest$</earliest>
      <latest>$time_token.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="charting.chart">column</option>
    <option name="charting.legend.placement">bottom</option>
    <option name="refresh.display">progressbar</option>
    <selection>
      <set token="time_token_selection.earliest">$start$</set>
      <!--set token="time_token_selection.latest">$end$</set-->
      <eval token="time_token_selection.latest">$time_token_selection.earliest$+5</eval>
    </selection>
  </chart>
</panel>

In the screenshot below you can see I am clicking on a bar; however, the time in the tokens is the chart's start and end, not the time of the bar I clicked on.

Regards
Robert
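Not a definitive fix, but one thing worth checking: in Simple XML, <selection> reacts to selecting (pan/zoom) a time range on the chart, and its $start$/$end$ tokens describe that selected range, whereas a click on a single bar is a drilldown event. A minimal sketch using <drilldown> instead, where $click.value$ carries the x-axis value of the clicked bar (the exact format of that value, and any strptime conversion needed before adding an offset, depends on the chart):

<drilldown>
  <set token="time_token_selection.earliest">$click.value$</set>
</drilldown>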