All Posts


I checked your post. The IdP certificate is a self-signed certificate, so it is not part of a chain; there is no root or intermediate certificate available.
@ITWhisperer @gcusello  Do you have any ideas on how to fix this?  Thanks
Thank you for your response. I’m limited to using the Universal Forwarder (UF) only. From various resources, I’ve learned that UF can send raw data, which works for me. I’ve successfully set up Logstash to receive syslog data from my Mac using the TCP input plugin. I’m now wondering how I can configure it to receive data based on the Source or SourceType. Alternatively, is there a way to send all data to Logstash in real-time?
This is more statement than question, but the community should be advised that Splunk Universal Forwarder 9.1.2 and 9.1.5 (the versions I have used and witnessed this on; I cannot speak to others yet) do not abide by cron syntax 100% of the time. It happens at a very low frequency, which can make repeated runs of scripted inputs difficult to notice or detect. I feel it's important the community is aware of this, because some scripts may be time sensitive, or you may expect a script to run only once per reporting period and so no deduplication was added -- among other circumstances where a double run might cause unintended impacts.

In my case I noticed it while doing a limited deploy of a Deployment Server GUID reset script. With only 8 targeted systems scheduled to run once per day, it was very easy to notice that it ran twice on multiple assets. Fortunately, I'd designed the script to create a bookmark so it wouldn't run more than once, allowing the UF to run it every day to catch new systems or name changes, so the extra runs didn't cause a flood of entries on the DS.

In my environment the scheduler prematurely ran the scripts roughly 2 seconds before the cron-scheduled hour-minute combination, so the following SPL could find when this occurred:

index=_internal sourcetype=splunkd reschedule_ms reschedule_ms<2000

Example: a scripted input is scheduled to run at "2 7 * * *" (07:02:00), but there was a run at 07:01:58 AND at 07:02:00. It ran two seconds early, then was rescheduled for the next scheduled interval, which happens to be the expected time (two seconds later), and the cycle repeats. The behaviour seemed to fluctuate and vary, and sometimes even resolved itself given enough time. It's difficult to know what influences it, or why the rescheduled interval comes up roughly 2000 milliseconds short after a correct run, but it does happen and may be affecting your scripts too.

Reported
I opened a ticket with Support, but there's no word yet on a permanent resolution for this issue, which affects less than 1% of forwarders. So I feel it was important to let the community know to check whether this is happening -- and whether it matters in your environment, because maybe a little duplication doesn't matter.

Workarounds
Changing the interval from cron syntax to a number of seconds does work as a workaround (see the sketch below), but that isn't always what you want -- sometimes you want a specific minute in an hour, not just hourly (whenever). Adding logic to the script itself to check the time is another possible workaround.
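For anyone wanting to try the interval workaround, here is a minimal inputs.conf sketch. The script path, sourcetype, and index are hypothetical placeholders; only the interval values matter. The first stanza uses cron syntax, which is the form that occasionally fired early for me; the second uses a plain number of seconds.

# inputs.conf -- cron-style interval (the form affected by the early runs)
[script://./bin/ds_guid_reset.sh]
# Run at 07:02 every day
interval = 2 7 * * *
sourcetype = ds_guid_reset
index = main
disabled = 0

# inputs.conf -- seconds-based interval (workaround, but you lose control of the exact minute)
[script://./bin/ds_guid_reset_alt.sh]
# Run once every 24 hours, counted from script start
interval = 86400
sourcetype = ds_guid_reset
index = main
disabled = 0

Note that the same script path cannot appear in two enabled stanzas; the second stanza is only there to show the alternative interval form.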
Unfortunately, I'm getting this error: "Error in 'EvalCommand': The arguments to the 'searchmatch' function are invalid." I've tried both solutions.
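For reference (this is a generic sketch, not the original poster's search): eval's searchmatch() takes a single quoted search string and returns true if the event matches it, so a minimal working call looks like the following. The field name and value here are only illustrative.

| makeresults
| eval msgTxt="Starting Controller=Full Action=GetFullReportAssessment"
| eval matched=if(searchmatch("msgTxt=Starting*"), "yes", "no")
| table msgTxt matched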
You need to provide your raw event in a code block - use the code sample button in the post editor to open a code block and paste your raw event into it, so we can see exactly what you are dealing with.
Sorry... I'm going to need to combine the policyId for both logs into one. Both do not work. Thanks again for your help.

Call out:

index=xxx appSubLvlNam="QAA" (msgTxt="Starting Controller=Full Action=GetFullReportAssessment data*" OR msgTxt="API=/api/full/reportAssessment/ CallStatus=Success*")
| eval msgTxt "Starting Controller=Full Action=GetFullReportAssessment data={"policyId":"Q123456789","inceptionDate":"20241011","postDate":"1900-01-01T12:00:00"}"
| rex "\"policyId\":\"(?<policyId>\w+)\""
| table policyId

Response:

index=xxx appSubLvlNam="QAA" (msgTxt="Starting Controller=Full Action=GetFullReportAssessment data*" OR msgTxt="API=/api/full/reportAssessment/ CallStatus=Success*")
| eval msgTxt "API=/api/full/reportAssessment/ CallStatus=Success Controller=full Action=GetFullReportAssessment Duration=17 data={"policyId":"Q123456789","inceptionDate":"20241015","postDate":"1900-01-01T12:00:00"} "
| rex "\"policyId\":\"(?<policyId>\w+)\""
| table policyId
No.

1. DB Connect doesn't run on a UF. DB Connect requires a "full" Splunk instance to run (some functionality runs on search heads, some on heavy forwarders), so you couldn't have installed and run it on a UF. As simple as that.

2. Just because something is called "TCP output" in Splunk terminology doesn't mean it produces a simple raw TCP data stream. The outputs specified with tcpout stanzas in outputs.conf use a proprietary Splunk protocol to forward data to downstream components (indexers or intermediate forwarders), and Logstash doesn't know how to handle that. You could try using a heavy forwarder with a syslog output to forward data to an external receiver; I think that's the only reasonable approach here (see the sketch below).
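To illustrate the syslog-output approach on a heavy forwarder, here is a minimal configuration sketch. The group name, host, port, and sourcetype are hypothetical, and in practice you would tune the routing to your own data, so treat this as a starting point rather than a complete configuration.

# outputs.conf on the heavy forwarder -- define a syslog output group
[syslog:logstash_syslog]
server = logstash.example.com:5514
type = tcp

# props.conf -- attach a routing transform to the sourcetype you want to send out
[my_sourcetype]
TRANSFORMS-route_syslog = send_to_logstash

# transforms.conf -- route every event of that sourcetype to the syslog group
[send_to_logstash]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = logstash_syslog

On the Logstash side you would then receive this with a plain tcp or syslog input rather than trying to read Splunk's forwarder-to-indexer protocol.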
OK, please find the details below.

Logs below - 3 sets of Start and End. I expected my query to provide 3 duration values, but I get ONLY 2, as observed below.

10/9/24 10:32:31.540 AM  2024-10-09T10:32:31.540+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
10/9/24 10:32:14.000 AM  2024-10-09T09:32:14.000+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
10/9/24 10:30:36.643 AM  2024-10-09T09:30:36.643+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
10/9/24 10:30:34.337 AM  2024-10-09T10:30:34.337+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
10/9/24 10:02:32.229 AM  2024-10-09T10:02:32.229+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
10/9/24 10:00:42.108 AM  2024-10-09T10:00:42.108+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!

Durations:
117.203
110.121
OK. You still haven't answered the low-level what and how questions. From a technical point of view it's irrelevant whether you're writing a SOC tool or just a web-based demo for customers; what matters is what exactly you want to "use" and how you plan to do it. Honestly, if you have a clear idea of what you're trying to do, it's easier to search for the answer on your own. The question has been asked several times before and the answer is relatively easy to find. I don't want to hand it to you on a silver platter, because what I think you're trying to do has some important consequences which you must understand.
Hello colleagues! Have any of you integrated Cisco Talos as an intelligence source for Splunk Enterprise Security? Can you tell me the best way to do this?
@PeaceHealthDan Unfortunate to find no replies to your query. I have a similar issue with DB Connect app version 3.18.0. Did you find a solution to your problem, or does it still persist?
Looking for props.conf / transforms.conf configuration guidance. The aim is to search logs from an HTTP Event Collector the same way we search regular logs; we don't want to be searching raw JSON on the search heads.

We're in the process of migrating from Splunk forwarders to logging-operator in k8s. The thing is, the Splunk forwarder uses log files and standard indexer discovery, whereas logging-operator reads stdout/stderr and must output to an HEC endpoint, meaning the logs arrive as JSON at the heavy forwarder. We want to use Splunk the same way we have over the years and want to avoid adapting alerts/dashboards etc. to the new JSON source.

OLD CONFIG AIMED AT THE INDEXERS (using the following config we get environment/site/node/team/pod as search-time extraction fields):

[vm.container.meta]
# source: /data/nodes/env1/site1/host1/logs/team1/env1/pod_name/localhost_access_log.log
CLEAN_KEYS = 0
REGEX = \/.*\/.*\/(.*)\/(.*)\/(.*)\/.*\/(.*)\/.*\/(.*)\/
FORMAT = environment::$1 site::$2 node::$3 team::$4 pod::$5
SOURCE_KEY = MetaData:Source
WRITE_META = true

SAMPLE LOG USING logging-operator:

{
  "log": "ts=2024-10-15T15:22:44.548Z caller=scrape.go:1353 level=debug component=\"scrape manager\" scrape_pool=kubernetes-pods target=http://1.1.1.1:8050/_api/metrics msg=\"Scrape failed\" err=\"Get \\\"http://1.1.1.1:8050/_api/metrics\\\": dial tcp 1.1.1.1:8050: connect: connection refused\"\n",
  "stream": "stderr",
  "time": "2024-10-15T15:22:44.548801729Z",
  "environment": "env1",
  "node": "host1",
  "pod": "pod_name",
  "site": "site1",
  "team": "team1"
}
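One possible adaptation (a sketch only, assuming the JSON above arrives as _raw under a hypothetical sourcetype such as kube:container:json, and that you still want the same indexed fields the old config produced) is to key the transform off _raw instead of the source path:

# props.conf on the heavy forwarder
[kube:container:json]
TRANSFORMS-container_meta = k8s.container.meta

# transforms.conf
[k8s.container.meta]
CLEAN_KEYS = 0
# Pull the metadata out of the JSON payload itself rather than the source path
REGEX = "environment"\s*:\s*"([^"]+)".*"node"\s*:\s*"([^"]+)".*"pod"\s*:\s*"([^"]+)".*"site"\s*:\s*"([^"]+)".*"team"\s*:\s*"([^"]+)"
FORMAT = environment::$1 node::$2 pod::$3 site::$4 team::$5
SOURCE_KEY = _raw
WRITE_META = true

The regex assumes the key order shown in the sample event (environment, node, pod, site, team); if the order varies, separate transforms per field or an INGEST_EVAL using json_extract() would be more robust.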
@marnall, you are right. I do not have any data in my KV store that would need to be restored in the future. The upgrade to 9.3.1 has been completed without any issues!   Thanks
Please share the full SPL you ran.  The one command I provided will not return a table so we need to know how you are creating a table.
| spath unit_test_name_failed{} output=unit_test_name_failed
| mvexpand unit_test_name_failed
| spath input=unit_test_name_failed
| where message="Failed to save the shipping address. An unexpected error occurred. Please try again later or contact HP Support for assistance."
| table message test_rail_name
We are creating a SOC with a SIEM that we would like to implement Splunk into. We are building the Splunk dashboard but would also like to use Splunk from our code. It's okay if Splunk runs in the background, but we would like to pull some of the Splunk GUI into our code. In short, we are creating a Splunk dashboard through Python code.
Yes. I had to download Splunk Security Essentials on my personal laptop and then use SAFE to get it to my work laptop. Next I copied the zip file up to the secure network and was able to install the application. My issue was that DISA was blocking some of the files when I downloaded them from Splunk. Not sure if this helps your situation.
What do you get from this?

| spath unit_test_name_failed{} output=unit_test_name_failed
| mvexpand unit_test_name_failed
| table unit_test_name_failed