
All Posts

Ouch.
1. If you're using numbered capture groups you don't have to name them. (I'm not even sure index-time extractions support named capture groups.)
2. Assuming your regex was right, you'd get key::value pairs in your raw event. Are you sure that's what you want? This will also cause "interesting" side effects, since that data would get split into terms at major breakers and would get indexed as indexed fields.
3. Manipulating structured data with regexes is asking for trouble. You have no guarantee that the fields will always be in the same order (and they might not always contain full data). That's why you use a structured data format.
I have events that contain a specific field which is sometimes very long and causes the rest of the event to be truncated. I want to remove this field or replace it with "long field detected". The problematic field is called "file" and I need to catch its last appearance; I also want the data after it, so the removal should stop at the first "," (comma). The event also contains nested fields. I've tried props.conf + transforms.conf, but it doesn't work. Here is an example of one event: deleted due to security reasons
Hi @corti77, I don't use SC4S, but usually rsyslog and a Universal Forwarder, which amounts to the same thing. So I usually use the batch command instead of the monitor command in inputs.conf (except when there are very big files to read), as in the sketch below. Ciao. Giuseppe
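A minimal inputs.conf sketch of the two stanza types (the directory path and index name here are hypothetical). batch requires move_policy = sinkhole and deletes each file after indexing, while monitor keeps watching the files:

[batch:///var/log/netapp]
move_policy = sinkhole
index = netapp
disabled = 0

[monitor:///var/log/netapp]
index = netapp
disabled = 0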
Hi @daniel99, did you install the Splunk_TA_Windows ( https://splunkbase.splunk.com/app/742 ) on your Splunk? Ciao. Giuseppe
The /raw endpoint should not need the ?auto_extract_timestamp=true parameter.
OK. Instead of creating new accounts just to post the same content, which is completely pointless, the thing you (and everyone who finds this idea important) can do is log into https://ideas.splunk.com and create or upvote a relevant idea there. If it gathers enough visibility, it might get considered. Just posting random rants here won't accomplish much.
Hi, I wonder what the easiest way is to monitor the deletion of files/folders on a CIFS NetApp using Splunk. I saw an add-on available; could someone share any experience with this use case? I have SC4S in place, so I thought I would configure syslog on the NetApp to send to SC4S and start digging into the logs. Is there any app I could leverage to ease the pain? Many thanks
OK. So that is interesting. I'd check then:
1) Whether there isn't by any chance another definition pointing to the same directory (for example, one index defined by means of $SPLUNK_DB and another based on a volume).
2) What actually consumes the disk in this directory. Just the buckets or something else? Maybe you have a lot of DAS data. Or maybe you're ingesting a lot of data with indexed extractions and have bloated tsidx files...
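To see what the buckets themselves account for, a rough SPL sketch (the index name is hypothetical) sums bucket sizes by state, which you can then compare against what du reports for the directory:

| dbinspect index=main
| stats sum(sizeOnDiskMB) AS diskMB by state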
Dear Splunk, Adding my voice here, because honestly, how is this still a thing? It’s like watching a toddler grow up but refusing to wear shoes because ‘barefoot builds character.’ We’re not trying to strip you of your rugged charm—we’re just asking you to stop tracking mud into the data center. Look, it’s not just about convenience. A proper YUM repo means:
Consistency: No more “Did we grab the right version from the website?” anxiety.
Efficiency: Automation beats playing 'Where’s the Download Link?' every release.
Security: Signed RPMs and authenticated repos mean we sleep better at night. (And you don’t want to mess with my sleep.)
You’re a billion-dollar company, not a weekend side project. If Bob’s Discount Monitoring Software has a YUM repo, so can you. Let’s not make this a 14th-birthday discussion, or worse, a sweet sixteen. Yours in exasperation, A Sysadmin Who Just Wants to Automate
Try something like this:

(?ms)<Event[^>]*>.*?<System>.*?<Provider\s+Name='(?<ProviderName>[^']*)'\s+Guid='(?<ProviderGuid>[^']*)'.*?<EventID>(?<EventID>\d+)<\/EventID>.*?<Version>(?<Version>\d+)<\/Version>.*?<Level>(?<Level>\d+)<\/Level>.*?<Task>(?<Task>\d+)<\/Task>.*?<Opcode>(?<Opcode>\d+)<\/Opcode>.*?<Keywords>(?<Keywords>[^<]*)<\/Keywords>.*?<TimeCreated\s+SystemTime='(?<SystemTime>[^']*)'.*?<EventRecordID>(?<EventRecordID>\d+)<\/EventRecordID>.*?<Correlation\s+ActivityID='(?<ActivityID>[^']*)'.*?<Execution\s+ProcessID='(?<ProcessID>\d+)'\s+ThreadID='(?<ThreadID>\d+)'.*?<Channel>(?<Channel>[^<]*)<\/Channel>.*?<Computer>(?<Computer>[^<]*)<\/Computer>.*?<EventData>.*?<Data\s+Name='SubjectUserSid'>(?<SubjectUserSid>[^<]*)<\/Data>.*?<Data\s+Name='SubjectUserName'>(?<SubjectUserName>[^<]*)<\/Data>.*?<Data\s+Name='SubjectDomainName'>(?<SubjectDomainName>[^<]*)<\/Data>.*?<Data\s+Name='SubjectLogonId'>(?<SubjectLogonId>[^<]*)<\/Data>.*?<Data\s+Name='TargetUserSid'>(?<TargetUserSid>[^<]*)<\/Data>.*?<Data\s+Name='TargetUserName'>(?<TargetUserName>[^<]*)<\/Data>.*?<Data\s+Name='TargetDomainName'>(?<TargetDomainName>[^<]*)<\/Data>.*?<Data\s+Name='TargetLogonId'>(?<TargetLogonId>[^<]*)<\/Data>.*?<Data\s+Name='LogonType'>(?<LogonType>[^<]*)<\/Data>.*?<Data\s+Name='EventIdx'>(?<EventIdx>[^<]*)<\/Data>.*?<Data\s+Name='EventCountTotal'>(?<EventCountTotal>[^<]*)<\/Data>.*?<Data\s+Name='GroupMembership'>(?<GroupMembership>.*?)<\/Data>.*?<\/EventData>.*?<\/Event>

https://regex101.com/r/19eJtB/1
Hello everyone, I'm facing a persistent issue with executing a script via a playbook in Splunk SOAR that uses WinRM. Here's the context:
I've created a playbook that is supposed to isolate a host via WinRM. The script works perfectly when I run it manually using the "Run Script" action from Splunk SOAR: the host gets isolated. However, when the same script is executed by the playbook, the execution is marked as "successful," but none of the expected outcomes occur: the host is not isolated.
To be more precise: I added an elevation check in the script, which relaunches in administrator mode with -Verb RunAs if necessary. This works perfectly for the manual action. The script writes to a log file located in C:\Users\Public\Documents to avoid permission issues, but the log file is not created when executed by the playbook. I've tried other directories and even simplified the logic to just disable a network adapter with Disable-NetAdapter, but nothing seems to work.
In summary, everything works fine when done manually, but not when automated via the playbook. I have the impression that there's a difference in context between manual execution and playbook execution that's causing the issue, perhaps related to permissions or WinRM session restrictions. Does anyone have any idea what might be preventing the playbook from executing this script correctly, or any suggestions for workarounds? I'm really running out of ideas and any help would be greatly appreciated. Thanks in advance!
Dear Splunk, I second this motion, with a few additional points for your consideration:
1. Manual Downloads Are So 2005: Logging into your website, hunting down the download link, and wrestling with wget is the DevOps equivalent of using a fax machine. Cool for retro vibes, but not ideal for modern enterprises.
2. RPM/YUM Best Practices: Providing a proper repo isn't just about convenience; it's about consistency, reliability, and automation. Signed RPMs and authenticated repos have been standard for decades. Even Bob's Open Source Project has a repo, and he works out of his garage.
3. Competitor Comparison: Elastic, Datadog, and the rest of the cool kids already have yum and apt repos. Don’t you want to sit at the popular table? Or at least not the “legacy tools” table?
4. Risk Management: Yes, we know, "unattended updates are risky!" But this isn't our first rodeo. We manage critical systems daily and don't just blindly yum update prod boxes. Give us the tools, and we'll handle the responsibility.
So, how about it, Splunk? Help us help you. We’ll even bake a cake for the 14th birthday of this request if that’s what it takes. Yours in perpetual hope, Another Disillusioned Admin
I am trying to configure Splunk to ingest only application, system and security logs from my local machine, but I can't find "Local event log collection" in my Splunk Enterprise on my MacBook. On my former laptop, which ran Windows, I could find the "Local event log collection" option in the data input section. Please, how can I go about this?
Hi Community, I am trying to build a regex that can help me reduce the size of an EventCode, in my case 4627. The idea is to use props and transforms:

props.conf
[XmlWinEventLog:Security]
TRANSFORMS-reduce_raw = reduce_event_raw

transforms.conf
[reduce_event_raw]
REGEX = <Event[^>]*>.*?<System>.*?<Provider\s+Name='(?<ProviderName>[^']*)'\s+Guid='(?<ProviderGuid>[^']*)'.*?<EventID>(?<EventID>\d+)</EventID>.*?<Version>(?<Version>\d+)</Version>.*?<Level>(?<Level>\d+)</Level>.*?<Task>(?<Task>\d+)</Task>.*?<Opcode>(?<Opcode>\d+)</Opcode>.*?<Keywords>(?<Keywords>[^<]*)</Keywords>.*?<TimeCreated\s+SystemTime='(?<SystemTime>[^']*)'.*?<EventRecordID>(?<EventRecordID>\d+)</EventRecordID>.*?<Correlation\s+ActivityID='(?<ActivityID>[^']*)'.*?<Execution\s+ProcessID='(?<ProcessID>\d+)'\s+ThreadID='(?<ThreadID>\d+)'.*?<Channel>(?<Channel>[^<]*)</Channel>.*?<Computer>(?<Computer>[^<]*)</Computer>.*?<EventData>.*?<Data\s+Name='SubjectUserSid'>(?<SubjectUserSid>[^<]*)</Data>.*?<Data\s+Name='SubjectUserName'>(?<SubjectUserName>[^<]*)</Data>.*?<Data\s+Name='SubjectDomainName'>(?<SubjectDomainName>[^<]*)</Data>.*?<Data\s+Name='SubjectLogonId'>(?<SubjectLogonId>[^<]*)</Data>.*?<Data\s+Name='TargetUserSid'>(?<TargetUserSid>[^<]*)</Data>.*?<Data\s+Name='TargetUserName'>(?<TargetUserName>[^<]*)</Data>.*?<Data\s+Name='TargetDomainName'>(?<TargetDomainName>[^<]*)</Data>.*?<Data\s+Name='TargetLogonId'>(?<TargetLogonId>[^<]*)</Data>.*?<Data\s+Name='LogonType'>(?<LogonType>[^<]*)</Data>.*?<Data\s+Name='EventIdx'>(?<EventIdx>[^<]*)</Data>.*?<Data\s+Name='EventCountTotal'>(?<EventCountTotal>[^<]*)</Data>.*?<Data\s+Name='GroupMembership'>(?<GroupMembership>.*?)</Data>.*?</EventData>.*?</Event>
FORMAT = ProviderName::$1 ProviderGuid::$2 EventID::$3 Version::$4 Level::$5 Task::$6 Opcode::$7 Keywords::$8 SystemTime::$9 EventRecordID::$10 ActivityID::$11 ProcessID::$12 ThreadID::$13 Channel::$14 Computer::$15 SubjectUserSid::$16 SubjectUserName::$17 SubjectDomainName::$18 SubjectLogonId::$19 TargetUserSid::$20 TargetUserName::$21 TargetDomainName::$22 TargetLogonId::$23 LogonType::$24 EventIdx::$25 EventCountTotal::$26 GroupMembership::$27
DEST_KEY = _raw

Then I will be able to pick which bits of the raw data get indexed. It looks like the regex does not pick up the fields correctly. Here is the raw event:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-a5ba-3e3bxxxxxx}'/><EventID>4627</EventID><Version>0</Version><Level>0</Level><Task>12554</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2024-11-27T11:27:45.6695363Z'/><EventRecordID>2177113</EventRecordID><Correlation ActivityID='{01491b93-40a4-0002-6926-4901a440db01}'/><Execution ProcessID='1196' ThreadID='1312'/><Channel>Security</Channel><Computer>Computer1</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S-1-5-18</Data><Data Name='SubjectUserName'>CXXXXXX</Data><Data Name='SubjectDomainName'>CXXXXXXXX</Data><Data Name='SubjectLogonId'>0x3e7</Data><Data Name='TargetUserSid'>S-1-5-18</Data><Data Name='TargetUserName'>SYSTEM</Data><Data Name='TargetDomainName'>NT AUTHORITY</Data><Data Name='TargetLogonId'>0x3e7</Data><Data Name='LogonType'>5</Data><Data Name='EventIdx'>1</Data><Data Name='EventCountTotal'>1</Data><Data Name='GroupMembership'> %{S-1-5-32-544} %{S-1-1-0} %{S-1-5-11} %{S-1-16-16384}</Data></EventData></Event

Any help troubleshooting the problem will be highly valued. Thank you in advance!
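One way to narrow this down is to first verify the transform plumbing with a stripped-down version of the same transform, wired in via the same TRANSFORMS- line (a sketch; the test stanza name is made up). Note the (?ms) flags so that . matches across newlines, which the regex101-verified regex in the reply above also adds:

[reduce_event_raw_test]
REGEX = (?ms)<EventID>(?<EventID>\d+)</EventID>.*?<Computer>(?<Computer>[^<]*)</Computer>
FORMAT = EventID::$1 Computer::$2
DEST_KEY = _raw

If events come through with _raw rewritten to just those two pairs, the pipeline works and the problem is isolated to the long regex.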
Dear All, I am facing difficulty loading all the evtx files in a folder into Splunk. I am using the free Splunk version for learning. My folder has 306 files; Splunk loaded only 212 files. In another case my folder has 47 files, but Splunk loaded only 3 files. I have this issue even after trying multiple times, and the count of successfully loaded files changes each time. Kindly help me with the possible reasons for this happening. MMM
Dear Splunk, It's me again, your 13-year-old feature request. I'm a teenager now, full of angst and unfulfilled dreams. You know, like being a real YUM repo instead of a pipe dream. Other software out there—Elastic, Docker—they've got their act together. They're hanging out in proper package managers, getting auto-updated, living the easy DevOps life. Meanwhile, I'm stuck here on the outside, manually downloaded and prayed over like it's still 1999. Look, it's cool. I get it. Maybe you think I'm too risky. But come on, it's not like admins are out here setting YUM cron jobs willy-nilly for production servers. We’ve evolved, Splunk. We use staging environments. We test. Heck, we even read changelogs (sometimes). So, how about it? Let’s make 2025 the year you give me a proper repo. Signed artifacts, authenticated HTTPS access—the works. I promise I won’t embarrass you. And if things go wrong? RPM rollback has my back. Yours, A Dream Deferred (but still hopeful) 13-year-old feature request
It is not related to the splunk.secret as suggested in other replies. When creating the same user with the same password on two different instances, Splunk first generates a random salt. The salt is then concatenated with the password and hashed. It is done this way to ensure security (to prevent rainbow tables). As the salt is randomly generated, the two instances will have different salts (the part between $6$ and the next $) and therefore different hashes (the part after the last $). When copying the passwd line to another instance, we force the new server to use the same salt, and therefore the hash will be the same. In summary, you can either create the user on both servers, or create it on one of them and copy the passwd file to the other one. If this is helpful please give me karma
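To illustrate, a small sketch using Python's crypt module (Unix-only, removed in Python 3.13; the password and salt below are made up):

import crypt

# Same password hashed with two random salts -> two different hashes
h1 = crypt.crypt("changeme", crypt.mksalt(crypt.METHOD_SHA512))
h2 = crypt.crypt("changeme", crypt.mksalt(crypt.METHOD_SHA512))
print(h1 == h2)  # False: the salts differ

# Reusing the same salt reproduces the same hash, which is why
# copying the passwd line (salt included) works across instances
salt = "$6$Qw8eXk2p$"
print(crypt.crypt("changeme", salt) == crypt.crypt("changeme", salt))  # True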
Hello, Thank you for your help, that did the trick. Unfortunately, the only option I see is to bring them in as a list. It appears VZEROP002 is always the first on the list, so this should do the trick. Thanks again, Tom
Hi Splunkers, I have an HWF that collects the firewall logs. For cost-saving reasons, some events are filtered out and not ingested into the indexer. For example, I have

props.conf
[my_sourcetype]
TRANSFORMS-set = dns, external

and transforms.conf
[dns]
REGEX = "dstport=53"
DEST_KEY = queue
FORMAT = nullQueue

[external]
REGEX = "to specific external IP range"
DEST_KEY = queue
FORMAT = nullQueue

So my HWF drops those events and the "rest" is ingested into the on-prem indexer - so far so good. One of our operational teams has requested that I ingest "their" logs into their Splunk Cloud instance. How can I technically do this?
1. I want to keep all the logs on the on-prem indexer with the filtering
2. I want to ingest events from a specific IP range to Splunk Cloud without filtering
BR, Norbert
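One common pattern here (a sketch under stated assumptions, not a definitive answer) is selective routing on the HWF with _TCP_ROUTING: a transform matches the team's events and lists both output groups in FORMAT, so matching events go to both destinations. The group names, regex, and Cloud endpoint below are placeholders; in practice the Splunk Cloud output is normally configured with the forwarder credentials app from your Cloud stack. One caveat: events sent to nullQueue are dropped before forwarding and reach neither destination, so the null-queue regexes must not match the events that have to arrive in Splunk Cloud unfiltered.

transforms.conf
[route_team_to_cloud]
REGEX = src=10\.1\.2\.
DEST_KEY = _TCP_ROUTING
FORMAT = onprem,splunkcloud

props.conf
[my_sourcetype]
TRANSFORMS-set = dns, external, route_team_to_cloud

outputs.conf
[tcpout]
defaultGroup = onprem

[tcpout:onprem]
server = idx.onprem.example:9997

[tcpout:splunkcloud]
server = inputs.mystack.splunkcloud.com:9997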
Hi @bowesmana @PickleRick, just for both of your information: when I replaced the endpoint /services/collector/event?auto_extract_timestamp=true with /services/collector/raw?auto_extract_timestamp=true, the raw data started arriving in the correct format and the timestamp also started matching. Example as below. Thanks to both of you for your support and valuable suggestions.
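For later readers, a minimal sketch of such a call to the raw endpoint (the host, token, sourcetype, and event below are hypothetical); as noted earlier in the thread, /raw should not even need the auto_extract_timestamp parameter, since timestamp extraction follows the sourcetype's props:

import requests

resp = requests.post(
    "https://splunk.example.com:8088/services/collector/raw",
    headers={
        "Authorization": "Splunk 00000000-0000-0000-0000-000000000000",
        # a channel header may be required if indexer acknowledgment is enabled
        "X-Splunk-Request-Channel": "11111111-1111-1111-1111-111111111111",
    },
    params={"sourcetype": "my_sourcetype"},
    data="2024-11-27T11:27:45Z host=fw01 action=allowed dstport=443",
    verify=False,  # lab only; verify TLS properly in production
)
print(resp.status_code, resp.text)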