All Posts

And did you check what sudo told you? Does your sudo work at all?
My use case requires strict relationships.

| inputlookup append=t mylookup
| eval _time = strptime(start_date, "%Y-%m-%d")
| addinfo
| rename info_* AS *
| where _time >= min_time AND _time <= max_time

This works for my use case, though it's a bit clunky. Thanks, all.
It's a bit of a philosophical issue. Firstly, there are many things that can be done on HFs. Some people run modular inputs on them, some have them just for receiving HEC, others have a "parsing layer" before sending data to indexers. So there are several different use cases.

As a rule of thumb, it's best to use the DS to distribute apps to forwarders regardless of what kind of forwarders they are. There are some caveats though. Most importantly, many modular inputs require interactive configuration from the web UI, and those can create configuration items which might:

1) Hold sensitive data like authorization info for external services

2) Be encrypted in a way that is not transferable between forwarders.

So you might end up in a situation where you don't want to distribute a particular app and its settings centrally.

As for me, assuming you can do that (because either the above-stated points do not apply or are of no concern), I'd deploy an app like that on a testing rig, create the configuration for a given input, capture the resulting conf files, and add them to an app pushed from the DS to the production environment.
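To illustrate the "capture and push" approach, the resulting deployment-server app might look something like this (a sketch; the app and file names are examples, not from the original post):

```
$SPLUNK_HOME/etc/deployment-apps/my_modinput_app/
    local/
        inputs.conf      # stanzas captured from the test rig's interactive setup
        app.conf
    metadata/
        local.meta
```

Only push secrets this way if they are stored in a form that works on every target forwarder; otherwise keep that part of the configuration local.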
@pipehitter- Please accept my answer if it helped you with your question by clicking on "Accept as Solution".
Your timepicker will not work. The timepicker is responsible for setting the earliest/latest parameters for the search. Those parameters only affect fetching events from indexes at the beginning of the search pipeline, when the events are generated with search or tstats (maybe there's another command they affect, but I can't think of one right now). They don't "filter" the events anywhere after that. Most importantly, if you're doing inputlookup or rest, the timepicker will not affect your search results in any way. And you can't do anything about it (maybe except some very, very ugly bending over backwards with addinfo and filtering with where, but that's not something any sane person would do).
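For completeness, the addinfo workaround dismissed above might look roughly like this (a sketch; the lookup and field names are hypothetical). addinfo attaches the search's time bounds as info_min_time/info_max_time, which a where clause can then compare against:

```
| inputlookup mylookup
| eval _time = strptime(start_date, "%Y-%m-%d")
| addinfo
| where _time >= info_min_time AND _time <= info_max_time
```

This makes lookup rows respect the timepicker, at the cost of filtering after the full lookup has been loaded.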
Yes, packaging your content into an app is a good practice, but it shouldn't matter much whether it's in apps/<app>/local or system/local for actually running the config (unless the settings get overwritten, of course). And no, the timestamp doesn't have to be in US format. That's what the sourcetype's time-parsing settings are for.

But back to @joewetzel63's issue - did you try running the script "as Splunk"? With

splunk cmd /opt/splunkforwarder/bin/scripts/whatever.sh
Yes. But are those the results of some searches that you want to "merge", or do you simply have two different sourcetypes from which different sets of fields are extracted? If it's the latter, your solution should be relatively simple:

<some restriction on index(es)> sourcetype IN (sourcetype1, sourcetype2)
| stats values(colA) as colA values(colB) as colB values(col1) as col1 values(col2) as col2 [...] by common_column

If you want all columns, you might simply go with values(*) as *
Hi @Sankar, do you want to display the urgency of each search or to filter results by urgency?

In the first case:

index=notable | stats values(urgency) AS urgency count BY search_name

In the second case (to have only notables with urgency=high):

index=notable urgency=high | stats count BY search_name

Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
We have a SH cluster of 3 SHs, where the Enterprise Security notables are not the same on all 3 SHs. Further, when we check the last 15 minutes of internal data, it also varies by a significant number of events (5K to 10K) compared to the other 2 SH members.
check mongod.log under $SPLUNK_HOME/var/log/splunk/
Hello, as a best practice you should create and deploy an app from the deployment server with your inputs.conf and script. Also make sure you include a valid timestamp at the beginning of the output in US format. Follow these instructions: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/scriptedinputsexample/
Ah, ok. I changed the definition to the below. It's still not working; the time picker is ignoring the time. Anything else I should do?
Solution from support : "Yes, it is still recommended to use the Deployment Server for centralized management and consistency across Heavy Forwarders. However, if local customizations are required, ensure those changes are synced back to the DS (etc/deployment-apps/<app_name>/local) to prevent overwrites. Alternatively, use 'excludeFromUpdate' in serverclass.conf to protect specific files or directories. For better scalability, avoid making direct changes on HFs and manage all configurations via the DS whenever possible."
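The excludeFromUpdate approach from the support answer can be sketched in serverclass.conf on the deployment server like this (the server class and app names are examples, not from the original thread):

```
# serverclass.conf on the DS: protect locally customized files on the HFs
[serverClass:hf_class:app:my_hf_app]
excludeFromUpdate = $app_root$/local, $app_root$/metadata/local.meta
```

Paths listed in excludeFromUpdate are left untouched when the DS redeploys the app, so local changes under local/ survive updates.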
Please don't tag uninvolved users.  If someone has an answer, they'll respond.
Hi @chrisboy68, the timepicker works with events, not with lookups. If you need to use time with a lookup, use a lookup with "Configure time-based lookup" in the Lookup Definition, or better, save the values in an index. Ciao. Giuseppe
Hi,

Struggling to figure out what I'm doing wrong. I have the following SPL:

| inputlookup append=t kvstore
| eval _time = strptime(start_date, "%Y-%m-%d")
| eval readable_time = strftime(_time, "%Y-%m-%d %H:%M:%S")

start_date is YYYY-MM-DD. When I modify _time, I can see it is changed via readable_time, but the timepicker still ignores the change. I can, say, search the last 30 days and I still get the events with _time before the range in the timepicker. Any ideas? Thanks!
@kiran_panchavat , @PickleRick , @ITWhisperer , @isoutamo , @bowesmana 
I have a base query which yields the field result; result can be either "Pass" or "Fail". A sample query result is attached. How can I create a column chart with the counts of passes and fails as different-color columns?

Here is my current search, which yields a column chart with two columns of the same color:

index="sampleindex" source="samplesource" | search test_name="IR Test" | search serial_number="TC-7" | spath result | stats count by result
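One common way to get per-value colors is to make Pass and Fail separate series rather than separate rows, e.g. by splitting with a by clause in chart. A sketch based on the search above (untested against the poster's data):

```
index="sampleindex" source="samplesource" test_name="IR Test" serial_number="TC-7"
| spath result
| chart count over serial_number by result
```

With `by result` as the split field, the column chart renders Pass and Fail as distinct series, each with its own color.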
Hello,

I'm using Splunk's ingest actions to aggregate logs and have created a destination and ruleset to forward copies to my S3 bucket, while sending filtered data to Splunk indexers. This setup is running on a Splunk Heavy Forwarder (HF), which receives logs on port 9997 from a syslog collector that gathers data from various sources. With the ingest actions feature, I'm limited to setting up a single sourcetype (possibly "syslog") and writing rules to filter and direct data to different indexes based on the device type. However, I also want to separate the data based on sourcetypes. I'm currently stuck on how to achieve this. Has anyone tried a similar solution or have any advice?
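One commonly used pattern for this kind of problem (offered as a sketch, not a confirmed fix for this setup; the stanza names and regex are hypothetical) is to rewrite the sourcetype at parse time with props/transforms on the HF, so events arrive at the ruleset already split by sourcetype:

```
# props.conf -- applied to the incoming catch-all sourcetype
[syslog]
TRANSFORMS-set_sourcetype = set_cisco_asa_sourcetype

# transforms.conf -- reassign sourcetype when the event matches a device pattern
[set_cisco_asa_sourcetype]
REGEX = %ASA-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:asa
```

Whether ingest actions see the rewritten sourcetype depends on where they sit in the parsing pipeline relative to these transforms, so this is worth verifying on a test HF first.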
@dural_yyz, @amahoski , @gcusello