All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I am trying to blacklist Windows event code 4769 by TaskCategory=Kerberos Service Ticket Operations. This regex is not working:

blacklist7 = EventCode="4769" TaskCategory="\w+\s\w+\s\w+\s\w+"

I've also tried:

blacklist7 = EventCode="4769" TaskCategory="Kerberos Service Ticket Operations"
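A minimal sketch of how the stanza might look in inputs.conf on the forwarder, assuming classic (non-XML) event rendering; the stanza and regex here are illustrative, not taken from the poster's config:

[WinEventLog://Security]
disabled = 0
renderXml = false
# blacklist values are regexes matched per key; anchoring the literal phrase
# avoids having to count words with \w+\s sequences
blacklist7 = EventCode="4769" TaskCategory="Kerberos\s+Service\s+Ticket\s+Operations"

If renderXml = true, the classic TaskCategory text is not present in the event, so a TaskCategory regex would never match; that is a common reason this style of blacklist silently fails.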
Hello, I have tried numerous configurations to get my Splunk Universal Forwarder to connect to my Splunk Enterprise instance, with no luck. I am trying to forward data to my indexer on port 3389. The only info in the logs reads: WARN AutoLoadBalancedConnectionStrategy [136236 TcpOutEloop] - Cooked connection to ip=XX.XX.XX.XX:3389 timed out. I have checked telnet with that port in both directions and the connection is successful. Any advice would be appreciated.
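One thing worth checking, sketched here with placeholder values: "Cooked connection ... timed out" usually means something answered the TCP handshake (which is why telnet succeeds) but did not speak the Splunk-to-Splunk protocol, and 3389 is conventionally the RDP port. A typical pairing uses the default receiving port 9997:

# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = XX.XX.XX.XX:9997

# inputs.conf on the indexer (enables Splunk-to-Splunk receiving)
[splunktcp://9997]
disabled = 0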
We are using OpenShift version 4.13.24 on the ROSA AWS managed solution. I've been looking at some metrics for the splunk-otel-collector-agent pods that we have running; in particular, we review Kubernetes metrics with Dynatrace. The alerts I am seeing are "High CPU Throttling", which basically means the CPU Throttling metric is nearly at, or at, the same level as the CPU Usage metric. The pods are configured for Splunk Platform. For these pods, I reviewed the YAML for the running instance and we include the following configuration:

- resources:
    limits:
      cpu: 200m
      memory: 500Mi
    requests:
      cpu: 200m
      memory: 500Mi

As a workaround I was thinking of increasing the cpu value under requests (and limits), however I haven't tried this yet. Has anyone else observed high CPU throttling issues? Thank you.
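If the collector was deployed with the splunk-otel-collector Helm chart, the resource block can usually be raised through values rather than editing the live pod; the key names below are assumptions based on the chart's conventions, so verify them against your chart version:

# values.yaml override (key names assumed; verify for your chart version)
agent:
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 500m
      memory: 500Mi

Raising the CPU limit (or removing CPU limits while keeping requests) is the usual remedy when the throttling metric sits at the limit.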
Hello, I need some help. I create a csv file on a remote server from a mysql query. I forward the csv file from the remote server to Splunk, and I can read the data. The csv file is overwritten each day; it may have only 1 line of data, or multiple lines of data - it is a list of devices that have gone down. If no devices are down, then the file only has the header, and data that says "No Devices Down". I only want to see data from the file on the day the file is written. The challenge I have is to read only the data in the file for that day. The issue is that Splunk indexes the data, so Splunk retains the data over time. I want only 1 day's info from the file, but Splunk has all the data indexed. How can I return only the data for the day, not all the data in the Splunk index? thanks, EWHolz
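One way to do this at search time, sketched with a placeholder index and source: restrict by when the events were indexed rather than by event time, so each search only returns the rows written today.

index=my_index source="*devices.csv" _index_earliest=@d _index_latest=now

or equivalently:

index=my_index source="*devices.csv"
| where _indextime >= relative_time(now(), "@d")

_index_earliest/_index_latest and _indextime are standard Splunk constructs; the index and source names are assumptions.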
Hi, how can I download Splunk APM for on-premises use? FYI: I don't want to use the cloud version. Thanks
I have a key called message. Inside the value are several results, but I need to extract only one result in the middle. Sample:

message: template: 1234abcd, eeid: 5678efgh, consumerid: broker

My rex is below; it returns the template value but also the results for eeid and consumerid, when I only need the template value of 1234abcd.

| rex field=message "template: (?<TemplateID>[^-]+)"
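A likely fix, assuming the fields are always comma-separated as in the sample: [^-]+ only stops at a hyphen, so the capture runs past the first comma. Stopping at the comma instead captures just the template value:

| rex field=message "template:\s*(?<TemplateID>[^,]+)"

With the sample above, TemplateID would come back as 1234abcd.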
Our system has a lot of Reports defined and I'm tasked with cleaning them up. The first thing I want to do is determine when each was last used. I found some searches that are supposed to help, but they are too old or something; the results are invalid (e.g. I am getting back Alerts and Searches when I want only Reports). Out of 199 Reports, 7 are scheduled, so I can guess when those ran last. Can someone show me a search that returns Reports, each with its last run date? thanks!
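A sketch of one common approach, using the _audit index for last-run times; the report-vs-alert filter mentioned below is a heuristic (alert_type="always" generally marks saved searches with no alert condition), so treat it as an assumption to verify:

index=_audit action=search savedsearch_name=*
| stats max(_time) as last_run by savedsearch_name
| eval last_run=strftime(last_run, "%Y-%m-%d %H:%M:%S")

To narrow this to Reports, it can be paired with | rest /servicesNS/-/-/saved/searches, keeping only entries where alert_type="always".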
Hi, is there a way we can check the dashboard load time? For example, if I choose today's timestamp and hit Submit, how long does it take the panels to return the data for that time range? Thanks, Selvam.
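One hedged way to approximate this from the search side (it measures panel search runtime, not browser render time): completed searches are logged to _audit with a total_run_time field, and in recent versions the provenance field tags dashboard-initiated searches, though that field's availability is an assumption to verify on your version:

index=_audit action=search info=completed provenance="UI:Dashboard:*"
| stats avg(total_run_time) as avg_runtime_s max(total_run_time) as max_runtime_s by provenance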
I'm sending $phrase$ in an email notification but it doesn't make it through, because Splunk assumes it is a variable. Is there a way to send it without Splunk recognizing it as a variable? Thanks
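In Splunk token contexts a doubled dollar sign is the documented escape, and it may carry over to alert email fields as well; that carry-over is the assumption here, so it is worth testing:

message="Here is a literal token: $$phrase$$"

which should render in the delivered email as $phrase$.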
I have an index set up that holds a number of fields, one of which is a comma-separated list of reference numbers, and I need to be able to search within this field via a dashboard. This is fine for a single reference, as we can just search within the field and prefix/suffix the dashboard parameter with wildcards, but for multiple values, which can be significant, I cannot see a way of searching. While I have looked at | split and IN, neither seems to provide what I need, though that may be down to what I tried. Example data:

Keys="272476, 272529, 274669, 714062, 714273, 845143, 851056, 853957, 855183"

I need to be able to enter any number of keys, in any order, and find any records that contain ANY of the keys - not all of them in a set order. So for the above it should return if I search for (853957) or (855183, 714062) or (272476, 714062, 855183). Is anyone able to point me towards a logical solution on this - it will be a key aspect of our use of Splunk, to enable users to copy/paste a list of reference numbers and assess where these occur in our logs.
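A sketch of one approach, with the index name and key values as placeholders: split the field into a multivalue and keep events where any entered key matches exactly.

index=my_index
| eval key_mv=split(replace(Keys, " ", ""), ",")
| where mvcount(mvfilter(match(key_mv, "^(853957|855183|714062)$"))) > 0

In a dashboard, the alternation (853957|855183|714062) could be built from the user's pasted list, for example by replacing commas with pipes in the input token before it reaches the search.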
Hello all, I'm writing my first Modular Input app, and I'm wondering what the best way is to store a REST API key for my python script. I've seen mention that the key can be stored within Splunk and retrieved by the script, but no solid explanation of how to do that. Can anyone provide a secure method? Thank you
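The usual mechanism is Splunk's credential store (the storage/passwords endpoint), reached with the session key that splunkd hands the modular input on stdin. A minimal sketch using the Splunk SDK for Python (splunklib), assuming it is bundled with the app; the realm and username are illustrative:

import splunklib.client as client

def get_api_key(session_key):
    # Connect back to the local splunkd using the session key from the input's stdin payload
    service = client.connect(token=session_key, host="localhost", port=8089)
    for cred in service.storage_passwords:
        # realm/username are whatever was used when the secret was stored
        if cred.realm == "my_app_realm" and cred.username == "api_key":
            return cred.clear_password
    return None

The secret is stored once (via the same storage_passwords collection, or a setup page) and splunkd encrypts it on disk.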
Hi, I have installed Splunk Universal Forwarder on several Windows servers, and they send their Windows logs to the indexers. All Windows logs are saved in the 'windows-index'. However, sometimes some of the Universal Forwarders are disconnected, and I have no logs from them for a period of time. How can I find which Universal Forwarders are disconnected? I should mention that the number of UFs is more than 400.
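A common sketch for this, runnable anywhere that can search _internal: every forwarder's connections are logged in metrics.log, so a forwarder whose last-seen time is old has stopped sending. The 15-minute threshold is an arbitrary example:

index=_internal source=*metrics.log* group=tcpin_connections
| stats max(_time) as last_seen by hostname
| eval minutes_silent=round((now()-last_seen)/60)
| where minutes_silent > 15
| sort - minutes_silent

For 400+ UFs, the Monitoring Console's forwarder views give a similar status out of the box once forwarder monitoring is enabled.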
Hi, I have a table of time, machine, and total errors. I need to count, for each machine, how many times 3 errors (or more) happened in 5 min. If more than 3 errors happened in one bucket, I mark that row as True. Finally I will return the frequency of 3 errors in 5 min (summarize all rows==True). I succeeded in doing that in Python, but not in Splunk. I wrote the following code:

| table TimeStamp,machine,totalErrors
| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval threshold=3
| eval time_window="5m"
| bucket span=5m time
| sort 0 machine,time
| streamstats sum(totalErrors) as cumulative_errors by machine,time
| eval Occurrence = if(cumulative_errors >= 3, "True", "False")
| table machine,TimeStamp,Occurrence

It is almost correct. Row 5 is supposed to be True: if we calculate the delta time between rows 1 and 5, more than 5 min passed, but between rows 2 and 5 less than 5 min passed and the number of errors is >= 3. How do I change it so it finds the delta time between each row (2 to 5, 3 to 5, ...) for each machine? Hope you understand. I need short and simple code because I will also need to do this for 1m, 2m, ... and 3, 5, ... errors.

row  Machine   TimeStamp            Occurrence
1    machine1  12/14/2023 10:12:32  FALSE
2    machine1  12/14/2023 10:12:50  FALSE
3    machine1  12/14/2023 10:13:06  TRUE
4    machine1  12/14/2023 10:13:24  TRUE
5    machine1  12/14/2023 10:17:34  FALSE
6    machine1  12/16/2023 21:01:45  FALSE
7    machine2  12/18/2023 7:53:54   False

thanks, Maayan
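A sketch of a sliding window rather than fixed buckets, which matches the row-2-to-row-5 case: streamstats supports a time_window option that sums over the trailing window per machine instead of over aligned 5-minute buckets. Field names follow the poster's table; note that time_window requires events ordered by _time, so the sort matters.

| eval _time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%3N")
| sort 0 machine _time
| streamstats time_window=5m sum(totalErrors) as errors_in_window by machine
| eval Occurrence = if(errors_in_window >= 3, "True", "False")
| stats count(eval(Occurrence="True")) as frequency by machine

Other windows and thresholds are just a matter of editing time_window and the comparison value.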
Hi, I am getting the below error when I'm trying to configure the Webhook alert to post in Microsoft Teams.

12-19-2023 11:57:56.700 +0000 ERROR sendmodalert [292254 AlertNotifierWorker-0] - action=webhook STDERR - Error sending webhook request: HTTP Error 400: Bad Request
12-19-2023 11:57:56.710 +0000 INFO sendmodalert [292254 AlertNotifierWorker-0] - action=webhook - Alert action script completed in duration=706 ms with exit code=2
12-19-2023 11:57:56.710 +0000 WARN sendmodalert [292254 AlertNotifierWorker-0] - action=webhook - Alert action script returned error code=2
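One way to narrow this down, sketched in Python with an assumed webhook URL: a 400 from Teams often means the payload shape is wrong, since Teams incoming webhooks expect their own JSON format (e.g. a "text" property) rather than Splunk's generic webhook payload. Posting a minimal body by hand shows whether the URL itself is good:

import json
import urllib.request

# Hypothetical Teams incoming-webhook URL
url = "https://example.webhook.office.com/webhookb2/..."
body = json.dumps({"text": "Splunk alert test"}).encode()
req = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).getcode())

If this succeeds while Splunk's built-in webhook action fails, a Teams-specific alert action (rather than the generic webhook) is the usual route.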
This is my end_time: 1703027679.5678809. After this query it showed output, but I am getting a 1969 date:

| eval time=strftime(time, "%m/%d/%y %H:%M:%S")

But when I tried with the literal value instead of the field, it showed the correct time:

| eval time=strftime(1703027679.5678809, "%m/%d/%y %H:%M:%S")
| table time
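A hedged guess at the fix, based on the field names quoted above: the epoch value lives in end_time, not time, and if end_time arrives as a string it needs converting before strftime will treat it as a timestamp:

| eval time=strftime(tonumber(end_time), "%m/%d/%y %H:%M:%S")
| table time

A 1969 date specifically suggests strftime received a value at or near 0 rather than the intended epoch seconds.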
Hi All, I am trying to send an email using the sendemail command with a csv as an attachment. The email is getting sent successfully, but the file is getting named "unknown-<date_time>". I want to rename this file. Please let me know how to do this.

| sendemail sendresults=true format=csv to=\"$email$\" graceful=false message="This is a test email" subject="Test Email Check"

Also, the message and subject are getting truncated. I am getting the message body as "This" and the subject as "Test". Please help me understand what is going wrong.

Help on:
1. Renaming the csv file.
2. How to avoid the message body and subject getting truncated.

I really appreciate your help on this. Regards, PNV
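A hedged sketch of two likely causes, keeping the poster's token: truncation at the first space usually means the quotes around message/subject are being consumed before sendemail sees them (the escaped \" on the to= argument hints the command is passing through a layer that strips quotes), and the attachment name is commonly reported to derive from the search's name, so an unsaved ad-hoc search yields "unknown-...". Running it as a saved report, with plain quoting:

| sendemail sendresults=true format=csv to="$email$" graceful=false subject="Test Email Check" message="This is a test email"

should then attach the csv under the report's name; both explanations are assumptions to verify, not documented guarantees.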
Hello, I would like to separate my data streams by opening three receiving ports. I have a multisite indexer cluster and I have created an app with this default inputs.conf file:

[tcp://9998]
disabled = 0
index = iscore_test
sourcetype = iscore_test
connection_host = ip

[tcp://9999]
disabled = 0
index = iscore_prod
sourcetype = iscore_prod
connection_host = ip

But when I check the receiving ports on the indexer, it only shows 9997 (which I would like to use just for Splunk internal logs). I think there is a faster way to do this rather than setting the receiving ports manually on each indexer. I already checked, and the app that I created was successfully copied to the indexers.
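Two hedged observations, with the app name below as a placeholder: the receiving-ports page in the UI only lists splunktcp ports, so [tcp://...] stanzas will not appear there even when they are working, and traffic arriving from Splunk forwarders needs [splunktcp://...] rather than [tcp://...]. For a cluster, the standard distribution path is the cluster manager:

# On the cluster manager:
# $SPLUNK_HOME/etc/manager-apps/my_inputs_app/default/inputs.conf
# (master-apps on pre-9.0 versions)
[splunktcp://9998]
disabled = 0

[splunktcp://9999]
disabled = 0

then push to all peers with: splunk apply cluster-bundle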
Hi, this app is reporting that one of my private apps is not compatible with Python 3.

Issue: File path designates Python 2 library.
App: TA-LoRaWAN_decoders
File Path: .../bin/br_uncompress.py
Issue No. 1: Error while checking the script: Can't parse /opt/splunk/etc/apps/TA-LoRaWAN_decoders/bin/br_uncompress.py: ParseError: bad input: type=1, value='print', context=(' ', (24, 8))

Any suggestions as to what the issue is?
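The ParseError points at a Python 2 print statement at line 24 of br_uncompress.py; the snippet below is illustrative, not the actual file contents:

# Python 2 only - fails to parse under Python 3:
print "decoded:", payload

# Python 3 compatible (also valid on Python 2.7):
print("decoded: %s" % payload)

Converting every print statement to the function form (or running 2to3 over the script) should clear this particular check.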
As a Splunk SME, I'm tasked with setting up the ingestion of Salesforce Marketing Cloud transactional messages into Splunk. We're currently trying to use the HTTP Event Collector (HEC) for this, but we couldn't get it to work because it's giving us an error. The Marketing Cloud developer I'm working with told me that in order to resolve the error, we need to figure out how to "verify callbacks" from our end (Splunk): https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/verifyCallback.html I need to know if there's a way to achieve that through HEC, or if we need to take an entirely different approach to get the Marketing Cloud events to Splunk.
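HEC only accepts events and returns its own response format, so it cannot echo a verification value back to Salesforce; a small relay in front of HEC is one way to satisfy the handshake. The sketch below uses only the Python standard library; the "verificationKey" field name and echo format are assumptions taken from the linked doc and must be verified, and the URL/token are placeholders:

import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder

class CallbackRelay(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        try:
            payload = json.loads(body)
        except ValueError:
            payload = None
        # Verification handshake: echo the key so Marketing Cloud marks the callback verified
        # ("verificationKey" is an assumed field name; confirm against the Salesforce doc)
        if isinstance(payload, dict) and "verificationKey" in payload:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(payload["verificationKey"].encode())
            return
        # Ordinary event: wrap it in HEC's envelope and forward
        event = json.dumps({"event": payload if payload is not None else body.decode("utf-8", "replace")})
        req = urllib.request.Request(HEC_URL, data=event.encode(),
                                     headers={"Authorization": "Splunk " + HEC_TOKEN})
        urllib.request.urlopen(req)  # TLS verification applies; adjust for self-signed certs
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), CallbackRelay).serve_forever()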
Hello, I’ve upgraded my FreeBSD server from 13.2-RELEASE to 14.0-RELEASE. Now, Splunk forwarder crashes when I try to start it. I made a clean install of the latest Splunk forwarder: same result. Any hint appreciated.

pid 8593 (splunkd), jid 0, uid 0: exited on signal 11 (no core dump - too large)
pid 8605 (splunkd), jid 0, uid 0: exited on signal 11 (no core dump - too large)

edit: last lines of ktrace output

11099 splunkd NAMI "/opt/splunkforwarder/etc/system/default/authentication.conf"
11099 splunkd RET open 3
11099 splunkd CALL fstat(0x3,0x82352cf30)
11099 splunkd STRU struct stat {dev=10246920463185163261, ino=219, mode=0100600, nlink=1, uid=1009, gid=1009, rdev=18446744073709551615, atime=0, mtime=1699928544, ctime=1702914937.560528000, birthtime=1699928544, size=1301, blksize=4096, blocks=9, flags=0x800 }
11099 splunkd RET fstat 0
11099 splunkd CALL read(0x3,0x35c8bc0,0x1000)
11099 splunkd GIO fd 3 read 1301 bytes "# Version 9.1.2 # DO NOT EDIT THIS FILE! # Changes to default files will be lost on update and are difficult to …/… enablePasswordHistory = false passwordHistoryCount = 24 constantLoginTime = 0 verboseLoginFailMsg = true "
11099 splunkd RET read 1301/0x515
11099 splunkd CALL read(0x3,0x35c8bc0,0x1000)
11099 splunkd GIO fd 3 read 0 bytes ""
11099 splunkd RET read 0
11099 splunkd CALL close(0x3)
11099 splunkd RET close 0
11099 splunkd PSIG SIGSEGV SIG_DFL code=SEGV_MAPERR
11084 splunk RET wait4 11099/0x2b5b
11084 splunk CALL write(0x2,0x820c56800,0x2a)
11084 splunk GIO fd 2 wrote 42 bytes "ERROR: pid 11099 terminated with signal 11"
11084 splunk RET write 42/0x2a
11084 splunk CALL write(0x2,0x825106cf7,0x1)
11084 splunk GIO fd 2 wrote 1 byte " "
11084 splunk RET write 1
11084 splunk CALL exit(0x8)