All Posts



I am curious about this. Could you say which TA is trying to initialize the modular input even though the inputs.conf stanzas are disabled?
In my case I need to search in the textbox with dynamic values from the message field, not with predefined values. Dynamic doesn't mean it should be free text. This next example gives you two inputs: one a truly dynamic multiselect, the other free text if you absolutely want to go that route.

<form version="1.1">
  <label>Multivalue input</label>
  <description>https://community.splunk.com/t5/Splunk-Search/How-to-filter-events-using-text-box-values/m-p/704698</description>
  <fieldset submitButton="false">
    <input type="multiselect" token="multiselect_tok" searchWhenChanged="true">
      <label>select all applicable</label>
      <choice value="*">All</choice>
      <initialValue>*</initialValue>
      <fieldForLabel>log_level</fieldForLabel>
      <fieldForValue>log_level</fieldForValue>
      <search>
        <query>index = _internal log_level = * | stats count by log_level</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="text" token="multivalue_text_tok" searchWhenChanged="true">
      <label>enter comma separated</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>Using &gt;$multiselect_tok$&lt;</title>
        <search>
          <query>index = _internal log_level IN ($multiselect_tok$)</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
    <panel>
      <event>
        <title>Using &gt;$multivalue_text_tok$&lt;</title>
        <search>
          <query>index = _internal [| makeresults | fields - _time | eval log_level = upper(trim(split("$multivalue_text_tok$", ",")))]</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>

The problem with free text is that people make far more mistakes than machines do. My code tries to cope with that as much as possible.
But unless you have a use case that uses free text in a meaningful way, forget comma-delimited input.
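As a side note on how the free-text branch above behaves: a subsearch returns its results as an implicit OR of field/value pairs, which is why stray spaces and mixed case in the text box can be tolerated. A minimal sketch you can paste into the search bar to see the mechanics (the sample values are purely illustrative):

```
| makeresults
| fields - _time
| eval log_level = upper(trim(split("error, WARN ,info", ",")))
```

Used as a subsearch inside another search, this should expand to something roughly like ( log_level="ERROR" OR log_level="WARN" OR log_level="INFO" ).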
Is there any solution ...
1. & 2. Honestly, that's surprising. Normally the events are ingested as either WinEventLog or XmlWinEventLog. See https://docs.splunk.com/Documentation/AddOns/released/Windows/SourcetypesandCIMdatamodelinfo
The naming where you used the channel name in the sourcetype comes from old versions of TA_windows, as far as I know. It has been deprecated for ages now, and TA_windows rewrites it to the normalized version.
Anyway, there is one more thing worth taking into consideration: you're rewriting your event data into a completely different format, so the normal TA_windows extractions won't work. You might recast the events into another sourcetype, but then you'd have to adjust all the CIM mappings and such to make this sourcetype work properly with stuff like ES.
Honestly, I'd go for preprocessing this with some external tool before ingestion and try to retain the original format while cutting the "unnecessary" data.
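For illustration, the kind of sourcetype rewrite TA_windows performs can be sketched with a standard index-time transform. The stanza and transform names below are hypothetical, not the actual TA_windows configuration:

```ini
# props.conf -- bind the rewrite to the legacy channel-based sourcetype
[WinEventLog:Security]
TRANSFORMS-normalize_sourcetype = force_wineventlog

# transforms.conf -- rewrite the sourcetype metadata on every event
[force_wineventlog]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::WinEventLog
```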
Hi @PickleRick, Thank you for the clarification, and yes, you are correct, I am addressing the same issue. Here's the updated response that reflects the correct sequence of events:

1. Component Placement
The Universal Forwarder (UF) is responsible only for collecting and forwarding data and does not perform parsing or transformations. SEDCMD settings in props.conf must therefore be applied on the indexers, where parsing occurs. Since there are no Heavy Forwarders in the architecture, the indexers were the correct location for these configurations.

2. Stanza Naming and Testing
I confirm that the XmlWinEventLog: Security stanza was the correct choice for this configuration. Each SEDCMD was tested separately in this stanza:
- The first SEDCMD partially worked, applying some transformations but not entirely meeting the expected output.
- The second SEDCMD, tested independently, caused Event ID 4627 to stop being indexed altogether.
These results confirm that XmlWinEventLog: Security is the appropriate naming convention, as the configuration was correctly recognised and applied. Additionally, I tested other stanzas, including WinEventLog: Security, and none worked as intended, further validating that XmlWinEventLog: Security is the correct stanza to use.

3. Configuration Location
For quick validation during testing, the configurations were initially placed in system/local. For production deployment, they have been moved into dedicated apps, ensuring better organisation, ease of updates, and compliance with Splunk's best practices.

4. Regex Validation
Both SEDCMD regex directives were validated using | makeresults with the raw event data. The partial success of the first and the indexing failure of the second highlight that the regex logic itself or environmental factors need adjustment for consistent application in production.

I hope this clears up any concerns and confirms the steps taken during testing and deployment.
Let me know if there's anything else you'd like me to elaborate on to help resolve the issue. Best regards, Dan
Every solution based on CLONE_SOURCETYPE quickly gets ugly because CLONE_SOURCETYPE is not discriminative. You not only have to process both event streams, duplicating your definitions for the original sourcetype so that the index-time settings are also applied to the new sourcetype; you also have to rewrite the sourcetype back to the old one at the end. And you have to filter both streams to work only on subsets of the events. Very, very ugly, and it quickly becomes unmaintainable. And if you by any chance manage to create a loop, you'll crash your splunkd.
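To make the ugliness concrete, here is a minimal, hypothetical CLONE_SOURCETYPE setup. The clone matches every event, so both the original and the cloned stream then need their own filtering and index-time settings:

```ini
# transforms.conf -- clones every matching event into a second sourcetype
[clone_events]
REGEX = .
CLONE_SOURCETYPE = my_cloned_sourcetype

# props.conf -- the original stream keeps flowing unchanged; the clone
# needs its own stanza, duplicating any index-time settings you care about
[original_sourcetype]
TRANSFORMS-clone = clone_events

[my_cloned_sourcetype]
# ...duplicated index-time settings, filtering of both streams, and an
# eventual rewrite of the sourcetype back to the original name go here...
```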
Hi @danielbb, As an alternative in Simple XML, you can stack visualizations vertically by including more than one visualization in a panel:

<row>
  <!-- first row -->
  <panel>
  </panel>
</row>
<row>
  <panel>
    <!-- second row, first column -->
  </panel>
  <panel>
    <chart>
      <!-- second row, second column, first row -->
    </chart>
    <chart>
      <!-- second row, second column, second row -->
    </chart>
    <chart>
      <!-- second row, second column, third row -->
    </chart>
  </panel>
</row>

Some visualizations may require an empty spacer, e.g. an <html/> section, to defeat the auto-layout code. Here's an extended example using an inverse normal macro (not shown) to generate a bit of random data:

<dashboard version="1.1" theme="light">
  <label>danielbb_layout</label>
  <search id="base">
    <query>| gentimes start=11/29/2024:00:00:00 end=11/30/2024:00:00:00 increment=1m | eval _time=starttime | fields + _time | sort 0 - _time | eval x=`norminv("`rand()`*(0.9999999999999999-0.0000000000000001)+0.0000000000000001", 1.3, 10)`</query>
  </search>
  <search id="stats" base="base">
    <query>| stats avg(x) as u stdev(x) as s</query>
  </search>
  <row>
    <panel>
      <html>
        <h1>Some Random Data</h1>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Histogram</title>
        <search base="base">
          <query>| chart count over x span=0.5</query>
        </search>
        <option name="charting.legend.placement">none</option>
        <option name="height">600</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <title>Samples</title>
        <search base="base"></search>
        <option name="charting.legend.placement">none</option>
      </chart>
      <single>
        <title>Mean</title>
        <search base="stats">
          <query>| fields u</query>
        </search>
        <option name="numberPrecision">0.000</option>
      </single>
      <html/>
      <single>
        <title>Standard Deviation</title>
        <search base="stats">
          <query>| fields s</query>
        </search>
        <option name="numberPrecision">0.000</option>
      </single>
    </panel>
  </row>
</dashboard>
I suppose it's another attempt at reducing the size of the logs while maintaining the events as such but cutting the unnecessary parts from them.
In addition to what @PickleRick wrote, what problem are you trying to solve? Splunk is quite capable of parsing XML logs, so why are you trying to re-format them? Why not use a transform to extract the fields directly instead of the interim SEDCMD step?
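As a sketch of that transform-based alternative (search-time extraction rather than rewriting _raw), something along these lines pulls every <Data Name='...'>value</Data> pair out of the XML into a field named after the Name attribute; the stanza names are illustrative:

```ini
# props.conf
[XmlWinEventLog]
REPORT-xml_data_fields = extract_xml_data

# transforms.conf -- capture group 1 becomes the field name, group 2 its value
[extract_xml_data]
REGEX = <Data Name='([^']+)'>([^<]*)</Data>
FORMAT = $1::$2
MV_ADD = true
```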
We don't know the whole picture, but from the partial info I can guess that, assuming the regex and the substitution pattern are OK, there are two obvious things which might be wrong:
1) You put your settings on the wrong Splunk component (i.e. you're trying to put them on the indexer when your data goes through a HF earlier), and/or
2) You're binding the SEDCMD to the wrong stanza. I see that you're using XmlWinEventLog: Security; this is a long-gone naming convention and hasn't been in use for several years now. These days all Windows events are of sourcetype XmlWinEventLog, and the source field differentiates between the originating event logs.
As a side note, it's good practice to avoid writing to system/local. Use apps to group your settings so that you can later easily manage them, overwrite, inherit, and so on.
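If the events do carry the normalized XmlWinEventLog sourcetype, a source-scoped props.conf stanza is one way to target only the Security log. This is a sketch with a placeholder sed expression; verify the exact source value in your own data first:

```ini
# props.conf on the parsing tier (indexers, or the HF if one sits in the path)
[source::XmlWinEventLog:Security]
SEDCMD-reduce_4627 = s/<pattern>/<replacement>/g
```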
Hi @kmahanta_17
1. First, you need to be a certified Power User.
2. For exam preparation, the following courses are recommended but not mandatory:
   - Splunk System Administration
   - Splunk Data Administration
You can refer to the documentation for the above courses if you are not able to attend them: https://www.splunk.com/en_us/pdfs/training/splunk-enterprise-certified-admin-track.pdf
Blueprint for the exam: https://www.splunk.com/en_us/pdfs/training/splunk-test-blueprint-enterprise-admin.pdf
Hi Folks, Can anyone suggest or help me out on how to prepare for the Splunk administration certification, and which certification is best in that case? Regards, Kanchan
Hi community,
The following mode=sed regex works as expected, but when I attempted it in system/local/props.conf on the indexers, it fails to trim, as tested via | makeresults:

| makeresults
| eval _raw="<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-a5ba-3e3bxxxxxx}'/><EventID>4627</EventID><Version>0</Version><Level>0</Level><Task>12554</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2024-11-27T11:27:45.6695363Z'/><EventRecordID>2177113</EventRecordID><Correlation ActivityID='{01491b93-40a4-0002-6926-4901a440db01}'/><Execution ProcessID='1196' ThreadID='1312'/><Channel>Security</Channel><Computer>Computer1</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S-1-5-18</Data><Data Name='SubjectUserName'>CXXXXXX</Data><Data Name='SubjectDomainName'>CXXXXXXXX</Data><Data Name='SubjectLogonId'>0x3e7</Data><Data Name='TargetUserSid'>S-1-5-18</Data><Data Name='TargetUserName'>SYSTEM</Data><Data Name='TargetDomainName'>NT AUTHORITY</Data><Data Name='TargetLogonId'>0x3e7</Data><Data Name='LogonType'>5</Data><Data Name='EventIdx'>1</Data><Data Name='EventCountTotal'>1</Data><Data Name='GroupMembership'> %{S-1-5-32-544} %{S-1-1-0} %{S-1-5-11} %{S-1-16-16384}</Data></EventData></Event>"
| rex mode=sed "s/(?s).*<Event[^>]*>.*?<EventID>4627<\/EventID>.*?<TimeCreated SystemTime='([^']*)'.*?<Computer>([^<]*)<\/Computer>.*?<Data Name='SubjectUserName'>([^<]*)<\/Data>.*?<Data Name='SubjectDomainName'>([^<]*)<\/Data>.*?<Data Name='TargetUserName'>([^<]*)<\/Data>.*?<Data Name='TargetDomainName'>([^<]*)<\/Data>.*?<Data Name='LogonType'>([^<]*)<\/Data>.*?<\/Event>.*/EventID:4627 TimeCreated:\\1 Computer:\\2 SubjectUserName:\\3 SubjectDomainName:\\4 TargetUserName:\\5 TargetDomainName:\\6 LogonType:\\7/g"

----------------------------------

[XmlWinEventLog: Security]
SEDCMD-reduce_4627 = s/(?s).*<Event[^>]*>.*?<EventID>4627<\/EventID>.*?<TimeCreated SystemTime='([^']*)'.*?<Computer>([^<]*)<\/Computer>.*?<Data Name='SubjectUserName'>([^<]*)<\/Data>.*?<Data Name='SubjectDomainName'>([^<]*)<\/Data>.*?<Data Name='TargetUserName'>([^<]*)<\/Data>.*?<Data Name='TargetDomainName'>([^<]*)<\/Data>.*?<Data Name='LogonType'>([^<]*)<\/Data>.*?<\/Event>.*/EventID:4627 TimeCreated:\1 Computer:\2 SubjectUserName:\3 SubjectDomainName:\4 TargetUserName:\5 TargetDomainName:\6 LogonType:\7/g

Can anyone help me identify where the problem is, please? Thank you.
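One thing worth double-checking here: props.conf stanza names must match the sourcetype (or a source:: pattern) exactly, and a stanza like [XmlWinEventLog: Security] with an embedded space is unlikely to match either. A hedged sketch of what the stanza might need to look like if the data uses the normalized sourcetype (verify against your own events before deploying):

```ini
# props.conf on the indexers -- sketch only; confirm the real sourcetype
# first, e.g. with: index=wineventlog | stats count by sourcetype, source
[XmlWinEventLog]
SEDCMD-reduce_4627 = s/<pattern>/<replacement>/g
```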
Hi @danielbb,
As @richgalloway also said, there are more parameters that you have to consider: data volume, HA or not, number of users and scheduled searches, etc.
My first hint is to engage a Splunk Certified Architect or Splunk Professional Services to design your architecture. You can find some ideas at https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf
E.g., having only one Indexer, there's no requirement for a Cluster Manager, and you can put the License Manager on the same Indexer; the Cluster Manager is required if you have HA requirements and at least two Indexers.
About the HF, it depends on many factors: where are your Meraki servers located, on premises or in the cloud? If on premises, it's a best practice to have a concentrator between the devices and the Indexers; you could also put the syslog receiver on the Indexers, although that isn't a best practice. Then, how does Meraki send logs? If by syslog, you should configure an rsyslog server or SC4S on a dedicated server.
As I said, I suggest engaging a Splunk Certified Architect.
Ciao.
Giuseppe
Hi @inessa40408,
Splunk is a search engine, and it takes the available logs. What's the technology you're using to collect these logs? Maybe the solution lies in the integration between your solution and Splunk.
Ciao.
Giuseppe
Hi @Rak,
Surely append and stats is better than join, but anyway, I suggest analyzing my approach and trying to use it, because it is faster and it doesn't have the 50,000-result limit of the subsearch.
Ciao.
Giuseppe
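Giuseppe's own approach isn't shown in this excerpt, but the usual subsearch-free pattern this alludes to combines both data sets in a single base search and correlates with stats. The index, sourcetype, and field names below are hypothetical:

```
(index=idx_a sourcetype=st_a) OR (index=idx_b sourcetype=st_b)
| stats values(field_a) AS field_a values(field_b) AS field_b BY common_key
| where isnotnull(field_a) AND isnotnull(field_b)
```

Because there is no subsearch, none of the subsearch result or runtime limits apply.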
Does it work if you run the script using the debugger, but uncheck the checkbox that says "Run as current user"? Also, if I understand correctly, you are not using the SOAR WinRM (Windows Remote Management) app, but are instead using a different app to trigger a script, or a custom function that implements WinRM communication?
Probably the cleanest way to do this is as @tscroggins suggested: make a browser extension that changes the interface. Another workaround would be to ignore the app drop-down menu completely and instead build a navigation menu in your apps with links to the various app versions, using nested drop-downs. It may be cumbersome to maintain, but it will look better.
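A hedged sketch of such a navigation menu, placed in each app's default/data/ui/nav/default.xml; the app and view names are hypothetical:

```xml
<nav search_view="search">
  <view name="search" default="true"/>
  <!-- collection renders as a drop-down of cross-app links -->
  <collection label="App versions">
    <a href="/app/myapp_v1/overview">MyApp 1.0</a>
    <a href="/app/myapp_v2/overview">MyApp 2.0</a>
  </collection>
</nav>
```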
If I understand correctly, you have a Windows system with a logfile that has a short retention time, so you cannot use the log file itself to look back very far, but you need to be able to look further back in time. This sounds like a straightforward use case for the Splunk forwarder. If you install the forwarder on the machine and set up an input configuration to monitor that logfile, the forwarder will send the log data to the Splunk indexers, where it will be indexed and stored for longer.
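A minimal sketch of that forwarder input, assuming a hypothetical log path, index, and sourcetype; it would go in an app's inputs.conf on the forwarder:

```ini
# inputs.conf on the Universal Forwarder (path/index/sourcetype hypothetical)
[monitor://C:\MyApp\logs\app.log]
sourcetype = myapp:log
index = myapp
disabled = 0
```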
I don't see why not. From Splunk's perspective it's just remote storage, regardless of where the indexers are. As long as you have network connectivity you should be fine. I suppose it might cost you more if you have inter-region traffic instead of intra-region traffic, but I'm not well-versed enough in AWS pricing to say anything definite here. And that's an issue completely outside of Splunk.
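For reference, a SmartStore-style remote storage definition looks roughly like this in indexes.conf. The bucket name, prefix, and index name are hypothetical, and cross-region access, credentials, and bucket policy are all configured on the AWS side, outside Splunk:

```ini
# indexes.conf (sketch)
[volume:remote_store]
storageType = remote
path = s3://my-splunk-bucket/smartstore

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/$_index_name
```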