All Posts


Dear Splunk, I second this motion, with a few additional points for your consideration:

1. Manual Downloads Are So 2005: Logging into your website, hunting down the download link, and wrestling with wget is the DevOps equivalent of using a fax machine. Cool for retro vibes, but not ideal for modern enterprises.
2. RPM/YUM Best Practices: Providing a proper repo isn't just about convenience; it's about consistency, reliability, and automation. Signed RPMs and authenticated repos have been standard for decades. Even Bob's Open Source Project has a repo, and he works out of his garage.
3. Competitor Comparison: Elastic, Datadog, and the rest of the cool kids already have yum and apt repos. Don't you want to sit at the popular table? Or at least not the "legacy tools" table?
4. Risk Management: Yes, we know, "unattended updates are risky!" But this isn't our first rodeo. We manage critical systems daily and don't just blindly yum update prod boxes. Give us the tools, and we'll handle the responsibility.

So, how about it, Splunk? Help us help you. We'll even bake a cake for the 14th birthday of this request if that's what it takes.

Yours in perpetual hope,
Another Disillusioned Admin
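P.S. For the record, all we are asking for is something like the following hypothetical /etc/yum.repos.d/splunk.repo (the URLs are entirely made up, since no such repo exists yet):

[splunk]
name=Splunk Enterprise
baseurl=https://yum.splunk.com/rhel/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://yum.splunk.com/splunk.gpg

One yum install splunk, one signed GPG key, and this whole thread retires happily.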
I am trying to configure Splunk to ingest only the application, system, and security logs from my local machine, but I can't find "Local event log collection" in my Splunk Enterprise instance on my MacBook. On my former laptop, which ran Windows, I could find the "Local event log collection" option in the data inputs section. How can I go about this, please?
Hi Community,

Trying to build a regex that can help me reduce the size of an event, in my case EventCode 4627. The idea is to use props and transforms:

props.conf
[XmlWinEventLog:Security]
TRANSFORMS-reduce_raw = reduce_event_raw

transforms.conf
[reduce_event_raw]
REGEX = <Event[^>]*>.*?<System>.*?<Provider\s+Name='(?<ProviderName>[^']*)'\s+Guid='(?<ProviderGuid>[^']*)'.*?<EventID>(?<EventID>\d+)</EventID>.*?<Version>(?<Version>\d+)</Version>.*?<Level>(?<Level>\d+)</Level>.*?<Task>(?<Task>\d+)</Task>.*?<Opcode>(?<Opcode>\d+)</Opcode>.*?<Keywords>(?<Keywords>[^<]*)</Keywords>.*?<TimeCreated\s+SystemTime='(?<SystemTime>[^']*)'.*?<EventRecordID>(?<EventRecordID>\d+)</EventRecordID>.*?<Correlation\s+ActivityID='(?<ActivityID>[^']*)'.*?<Execution\s+ProcessID='(?<ProcessID>\d+)'\s+ThreadID='(?<ThreadID>\d+)'.*?<Channel>(?<Channel>[^<]*)</Channel>.*?<Computer>(?<Computer>[^<]*)</Computer>.*?<EventData>.*?<Data\s+Name='SubjectUserSid'>(?<SubjectUserSid>[^<]*)</Data>.*?<Data\s+Name='SubjectUserName'>(?<SubjectUserName>[^<]*)</Data>.*?<Data\s+Name='SubjectDomainName'>(?<SubjectDomainName>[^<]*)</Data>.*?<Data\s+Name='SubjectLogonId'>(?<SubjectLogonId>[^<]*)</Data>.*?<Data\s+Name='TargetUserSid'>(?<TargetUserSid>[^<]*)</Data>.*?<Data\s+Name='TargetUserName'>(?<TargetUserName>[^<]*)</Data>.*?<Data\s+Name='TargetDomainName'>(?<TargetDomainName>[^<]*)</Data>.*?<Data\s+Name='TargetLogonId'>(?<TargetLogonId>[^<]*)</Data>.*?<Data\s+Name='LogonType'>(?<LogonType>[^<]*)</Data>.*?<Data\s+Name='EventIdx'>(?<EventIdx>[^<]*)</Data>.*?<Data\s+Name='EventCountTotal'>(?<EventCountTotal>[^<]*)</Data>.*?<Data\s+Name='GroupMembership'>(?<GroupMembership>.*?)</Data>.*?</EventData>.*?</Event>
FORMAT = ProviderName::$1 ProviderGuid::$2 EventID::$3 Version::$4 Level::$5 Task::$6 Opcode::$7 Keywords::$8 SystemTime::$9 EventRecordID::$10 ActivityID::$11 ProcessID::$12 ThreadID::$13 Channel::$14 Computer::$15 SubjectUserSid::$16 SubjectUserName::$17 SubjectDomainName::$18 SubjectLogonId::$19 TargetUserSid::$20 TargetUserName::$21 TargetDomainName::$22 TargetLogonId::$23 LogonType::$24 EventIdx::$25 EventCountTotal::$26 GroupMembership::$27
DEST_KEY = _raw

Then I will be able to pick which bits of the raw data get indexed. It looks like the regex does not pick up the fields correctly. Here is the raw event:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-a5ba-3e3bxxxxxx}'/><EventID>4627</EventID><Version>0</Version><Level>0</Level><Task>12554</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2024-11-27T11:27:45.6695363Z'/><EventRecordID>2177113</EventRecordID><Correlation ActivityID='{01491b93-40a4-0002-6926-4901a440db01}'/><Execution ProcessID='1196' ThreadID='1312'/><Channel>Security</Channel><Computer>Computer1</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S-1-5-18</Data><Data Name='SubjectUserName'>CXXXXXX</Data><Data Name='SubjectDomainName'>CXXXXXXXX</Data><Data Name='SubjectLogonId'>0x3e7</Data><Data Name='TargetUserSid'>S-1-5-18</Data><Data Name='TargetUserName'>SYSTEM</Data><Data Name='TargetDomainName'>NT AUTHORITY</Data><Data Name='TargetLogonId'>0x3e7</Data><Data Name='LogonType'>5</Data><Data Name='EventIdx'>1</Data><Data Name='EventCountTotal'>1</Data><Data Name='GroupMembership'> %{S-1-5-32-544} %{S-1-1-0} %{S-1-5-11} %{S-1-16-16384}</Data></EventData></Event

Any help troubleshooting the problem will be highly valued. Thank you in advance!
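In case it helps with troubleshooting: the pattern can be validated in SPL before the transform is deployed (a minimal sketch; paste the sample event and the REGEX value in place of the placeholders):

| makeresults
| eval _raw="<paste one sample event here>"
| rex field=_raw "<paste the REGEX value here>"
| table ProviderName EventID Computer GroupMembership

If rex extracts the fields here but the transform still rewrites nothing, the problem is more likely in the props/transforms wiring than in the regex itself.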
Dear All, I am facing difficulty loading all the .evtx files in a folder into Splunk. I am using the free Splunk version for learning. My folder has 306 files, but Splunk loaded only 212 of them. In another case, my folder has 47 files, but Splunk loaded only 3. I have this issue even after trying multiple times, and the count of successfully loaded files keeps changing. Kindly help me with the possible reasons for this. MMM
Dear Splunk,

It's me again, your 13-year-old feature request. I'm a teenager now, full of angst and unfulfilled dreams. You know, like being a real YUM repo instead of a pipe dream.

Other software out there—Elastic, Docker—they've got their act together. They're hanging out in proper package managers, getting auto-updated, living the easy DevOps life. Meanwhile, I'm stuck here on the outside, manually downloaded and prayed over like it's still 1999.

Look, it's cool. I get it. Maybe you think I'm too risky. But come on, it's not like admins are out here setting YUM cron jobs willy-nilly for production servers. We've evolved, Splunk. We use staging environments. We test. Heck, we even read changelogs (sometimes).

So, how about it? Let's make 2025 the year you give me a proper repo. Signed artifacts, authenticated HTTPS access—the works. I promise I won't embarrass you. And if things go wrong? RPM rollback has my back.

Yours,
A Dream Deferred (but still hopeful) 13-year-old feature request
It is not related to the splunk.secret, as suggested in other replies. When creating the same user with the same password on two different instances, Splunk first generates a random salt. The salt is then concatenated with the password and hashed. It is done this way to ensure security (to prevent rainbow-table attacks). As the salt is randomly generated, each instance will have a different salt (the string between $6$ and the next $) and therefore a different hash (the string after the last $). When copying the passwd line to another instance, we are forcing the new server to use the same salt, and therefore the hash will be the same. In summary, you can either create the user on both servers, or create it on one of them and copy the passwd file to the other. If this is helpful please give me karma
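To illustrate the salt behaviour outside of Splunk (a minimal sketch using openssl, which produces the same $6$ SHA-512 crypt format; the password and salt below are just examples):

openssl passwd -6 -salt abcd1234 MyPassword
# same salt -> identical hash on any machine, every time
openssl passwd -6 MyPassword
# random salt -> a different hash on every run, even for the same password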
Hello, Thank you for your help, that did the trick. Unfortunately, the only option I see is to bring them in as a list. It appears VZEROP002 is always the first on the list, so this should work. Thanks again, Tom
Hi Splunkers,

I have an HWF that collects the firewall logs. For cost-saving reasons, some events are filtered out and not ingested into the indexer. For example, I have

props.conf
[my_sourcetype]
TRANSFORMS-set = dns, external

and transforms.conf
[dns]
REGEX = dstport=53
DEST_KEY = queue
FORMAT = nullQueue

[external]
REGEX = <regex matching a specific external IP range>
DEST_KEY = queue
FORMAT = nullQueue

So my HWF drops those events and the "rest" is ingested to the on-prem indexer - so far so good...

One of our operational teams requested that I ingest "their" logs into their Splunk Cloud instance. How can I technically do this?
1. I want to keep all the logs on the on-prem indexer, with the filtering
2. I want to ingest events from a specific IP range to Splunk Cloud, without filtering

BR,
Norbert
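One direction I am considering (a sketch only; the group names, stack URL, and IP regex below are placeholders) is a transform that sets _TCP_ROUTING for the team's events, plus a second tcpout group for Splunk Cloud:

props.conf
[my_sourcetype]
TRANSFORMS-route = route_team_to_cloud
TRANSFORMS-set = dns, external

transforms.conf
[route_team_to_cloud]
REGEX = src=10\.20\.30\.\d+
DEST_KEY = _TCP_ROUTING
FORMAT = onprem,splunkcloud

outputs.conf
[tcpout]
defaultGroup = onprem

[tcpout:onprem]
server = indexer01.example.local:9997

[tcpout:splunkcloud]
server = inputs.<your-stack>.splunkcloud.com:9997

One caveat with this sketch: nullQueue drops an event for every output, so the team's events that match the filters (e.g. dstport=53) would still be lost for Splunk Cloud. Sending them unfiltered to Cloud while keeping the filtering on-prem would need something like CLONE_SOURCETYPE on the matching events, or a separate input.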
Hi @bowesmana @PickleRick just for both of your information: when I replaced the endpoint /services/collector/event?auto_extract_timestamp=true with /services/collector/raw?auto_extract_timestamp=true, the raw data started arriving in the correct format and the timestamps also started matching. Thanks to both of you for the support and valuable suggestions.
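For anyone landing here later, the working call looks roughly like this (host, port, and token are placeholders):

curl -k "https://splunk.example.com:8088/services/collector/raw?auto_extract_timestamp=true" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d "2024-11-27 11:27:45 host=web01 action=login status=success"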
@PickleRick , Okay, thanks for your answer. I checked both "| rest /data/indexes/myindex" and btool as you mentioned, and both have maxTotalDataSizeMB set to 5000 (5 GB). I can't check through the GUI "Settings -> Indexes", but I guess it's not that important.
Hi @santhipriya , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
One more important thing to check:

splunk btool indexes list --debug

This will give you an overview of the settings which are applied to your indexes, along with where they are defined. Make sure your settings are defined in the proper places: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles
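With --debug, each line is prefixed by the file that set it, so the output looks roughly like this (paths and values are illustrative):

/opt/splunk/etc/apps/myapp/local/indexes.conf   [myindex]
/opt/splunk/etc/apps/myapp/local/indexes.conf   homePath = $SPLUNK_DB/myindex/db
/opt/splunk/etc/system/default/indexes.conf     maxTotalDataSizeMB = 500000

If a setting is picked up from system/default instead of your app, that is usually the first hint that your own copy lives in the wrong place.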
Yes. That's a valid point. That's just one of the specific cases of my general remarks about mixing the same space both as a volume-based definition and a direct directory "pointer". Theoretically, you could use $SPLUNK_DB as your volume location, but:

1. There are some default indexes which write there (like _internal and the other underscore indexes) and you'll have to make sure to relocate/redefine all of them, which might be tricky to keep synced with new software releases that might introduce new indexes (like _configtracker).
2. $SPLUNK_DB does not contain just indexes but also - for example - kvstore contents (and its backups).
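For illustration, a clean separation keeps the volume on its own filesystem, away from $SPLUNK_DB (a minimal sketch; paths and sizes are just examples):

indexes.conf
[volume:primary]
path = /data/splunk/indexes
maxVolumeDataSizeMB = 500000

[myindex]
homePath = volume:primary/myindex/db
coldPath = volume:primary/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb

Note that thawedPath cannot use a volume reference, which is why it still points at a plain path here.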
Well, the OP explicitly said that they verified the macro, the tags and so on. So while the symptoms were similar (couldn't find the datamodel), the reason would probably have been different. Just pointing this out so that we avoid confusion and people can benefit from finding the right answer for their problem in the future.
Hi @isoutamo , Thanks for your input, but that's not the issue here. I already cleaned my saturated index and restarted the indexer, and it works fine now. And as I said to @richgalloway , in my post I stated that only one of my indexes was taking way more space than it should, and I know which one. The issue is: why did it exceed the maxTotalDataSizeMB set in indexes.conf? Just adding more space might not be the right solution for us, but I'll keep in mind the whole thing about using volumes for better planning of data storage, thanks.
Close. But not complete.

index=* [| inputlookup numbers.csv | rename number as search | table search | format ]

Without the final format command, Splunk will use only the first row of the subsearch results as a condition, so it will only look for the first value from the lookup.
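To illustrate, assuming numbers.csv has a number column containing 4624 and 4625, the subsearch with format expands the outer search to:

index=* ( ( 4624 ) OR ( 4625 ) )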
@richgalloway , Maybe my post was not clear enough, sorry. I did state that one of my indexes on the partition (and I already know which one, the one I gave in the indexes.conf) is saturated with warm buckets (db_*) and taking all the space available, even though it's configured as shown in the indexes.conf. Of course multiple indexes are using the disk, but only one went highly above the maxTotalDataSizeMB and saturated it.
Hi @bowesmana, Events are not showing as expected after selecting "show source".  
Have you tried stopping Splunk, removing the mongod.lock file, and then starting Splunk again?
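A minimal sketch of those steps (assuming a default installation where the KV store lives under $SPLUNK_HOME/var/lib/splunk/kvstore):

$SPLUNK_HOME/bin/splunk stop
rm $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/mongod.lock
$SPLUNK_HOME/bin/splunk start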
Hi @Naa_Win , let me understand: you want to send data from the abc servers to the new index and all the others to the old one, is that correct? You could try something like this:

[monitor:///usr/local/apps/logs/*/base_log/*/*/*/*.log]
disabled = 0
sourcetype = base:syslog
index = base
host_segment = 9
# blacklist values are regular expressions, not wildcards
blacklist1 = /usr/local/apps/logs/.*/base_log/.*xyz.*/.*\.log$
blacklist2 = /usr/local/apps/logs/.*/base_log/.*abc.*/.*\.log$

[monitor:///usr/local/apps/logs/*/base_log/*/*/*xyz*/*.log]
disabled = 0
sourcetype = base:syslog
index = mynewindex
host_segment = 9

[monitor:///usr/local/apps/logs/*/base_log/*/*/*abc*/*.log]
disabled = 0
sourcetype = base:syslog
index = mynewindex
host_segment = 9

Ciao. Giuseppe