All Posts


Hi @Joshua2 I won't judge the solution design, but the intended use of the Universal Forwarder is to forward logs, not store them locally for later manual transfer. You can do this with the UF by setting up local indexing on each machine; however, you would have to pay for license usage as the data is indexed at the UF tier, and then pay again when it is transferred and indexed on your Splunk Enterprise instance later. So you'd be paying twice to index the same data. Also note there are performance implications for local indexing, and there are very limited parsing options on the UF, so you'd need to set up parsing later at the indexer anyway. If you're OK with that option, you can do it by setting the indexAndForward setting in outputs.conf:

[tcpout]
defaultGroup = local_indexing

[tcpout:local_indexing]
indexAndForward = true

A better option to store the logs locally would be to use a third-party log collection tool like Fluentd or Logstash, or write your own PowerShell scripts. Ideally you would use Splunk for its intended purpose by directly forwarding the logs from the 60 UFs to a Splunk indexer (or HF); however, I understand that may not be possible in this case.
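As an aside, outputs.conf also documents a standalone [indexAndForward] stanza that enables local indexing globally rather than per output group; a minimal sketch, worth checking against the outputs.conf spec for your version:

[indexAndForward]
index = true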
A warm bucket will not be evicted if it is too new, on the premise that new data is more likely to be searched than old data. "New" is defined by hotlist_recency_secs and hotlist_bloom_filter_recency_hours in indexes.conf. Urgent mode eviction comes into play when there are not enough files eligible for normal eviction. In urgent mode, the hotlist_recency_secs and hotlist_bloom_filter_recency_hours settings are ignored.
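For reference, these can be tuned per index in indexes.conf; a minimal sketch, where the index name my_s2_index is an assumption and the values shown (86400 seconds = 24 hours, 360 hours = 15 days) are illustrative, matching the documented defaults as I recall them:

[my_s2_index]
hotlist_recency_secs = 86400
hotlist_bloom_filter_recency_hours = 360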
Basically what I'm looking for is: I have a multiselect server input. If I select 5 servers, of which 3 belong to US and 2 to UK, I want two panels. The US panel shows its clients (3 total), whereas the UK panel shows the identical thing, but only for the 2 UK clients.
Hi! Is it possible to integrate the app with multiple ServiceNow instances? If yes, how do you "choose" the one you want to create the incident in? For example, if I am using: | snowincidentstream OR | snowincidentalert
I have around 60 standalone Windows laptops that are not networked. I'm looking to install a UF to capture the Windows logs and have them stored on the local drive "c:\logs". The logs will then be transferred to a USB for archiving and indexed into Splunk for NIST 800 compliance, e.g. login success/failure. I am struggling to find the correct syntax for the UF to save locally, as it asks for a host and port.

josh
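For the capture side of this, Windows Event Log collection on a UF is normally configured in inputs.conf rather than through the installer's host/port prompts (those are for the forwarding destination); a minimal sketch, where the index name is an assumption:

[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://System]
disabled = 0
index = wineventlog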
This can be quite challenging, because if you then remove one of the sourcetypes from the auto-populated multiselect and then add a new index selection, it has to do some fiddly calculations to find out which of the indexes has changed and then add only the new sourcetypes to the current sourcetype selection. It can be done, but it's quite a messy thing to play with given all the possible permutations of what you can do. Here's an example of simply adding sourcetypes to the auto select - hopefully it will give you some pointers. I've put in a bunch of search panels that can show you what's going on.

<form version="1.1" theme="light">
  <label>AutoSelectMulti</label>
  <init>
    <set token="pre_indexes"></set>
  </init>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="multiselect" token="tok_indexes" searchWhenChanged="true">
        <label>Indexes</label>
        <prefix>index IN (</prefix>
        <suffix>)</suffix>
        <valuePrefix>"</valuePrefix>
        <valueSuffix>"</valueSuffix>
        <delimiter> ,</delimiter>
        <fieldForLabel>index</fieldForLabel>
        <fieldForValue>index</fieldForValue>
        <search base="base_defs">
          <query/>
        </search>
      </input>
      <input type="multiselect" token="tok_sourcetype" searchWhenChanged="true">
        <label>Sourcetypes</label>
        <prefix>sourcetype IN (</prefix>
        <suffix>)</suffix>
        <valuePrefix>"</valuePrefix>
        <valueSuffix>"</valueSuffix>
        <delimiter>,</delimiter>
        <search base="base_defs">
          <query>| stats count by sourcetype</query>
        </search>
        <fieldForLabel>sourcetype</fieldForLabel>
        <fieldForValue>sourcetype</fieldForValue>
      </input>
    </panel>
    <panel>
      <table>
        <search id="base_defs">
          <query>| tstats values(sourcetype) as sourcetype count where index=* OR index=_* by index</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">2</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
      <table>
        <search base="base_defs">
          <done>
            <eval token="form.tok_sourcetype">$result.form_tok_sourcetype$</eval>
            <set token="tok_sourcetype">$result.tok_sourcetype$</set>
          </done>
          <query>| search $tok_indexes$
| stats values(sourcetype) as sourcetype
| eval form_tok_sourcetype=sourcetype
| eval tok_sourcetype="sourcetype in (\"".mvjoin(sourcetype, "\",\"")."\")"</query>
        </search>
        <option name="count">2</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
Hi @Viral_G, Try adding "align": "center" in the columnFormat tag of your tables, e.g.

"columnFormat": {
  "identity": {
    "data": "> table | seriesByName(\"identity\") | formatByType(identityColumnFormatEditorConfig)",
    "align": "center"
  },
  "usernames": {
    "data": "> table | seriesByName(\"usernames\") | formatByType(usernamesColumnFormatEditorConfig)",
    "align": "center"
  }
}
Hi @FeatureCreeep, try setting 'param.is_use_event_time' in alert_actions.conf, as discussed in this doc: https://lantern.splunk.com/Observability/Product_Tips/IT_Service_Intelligence/Configuring_notable_event_timestamps_to_match_raw_data
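If it helps, a minimal sketch of where that setting lands; the [itsi_event_generator] stanza name is my assumption of the relevant alert action, so verify it against the doc above for your ITSI version:

[itsi_event_generator]
param.is_use_event_time = 1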
Hi @arunsoni, For this data you can break the events before each timestamp appears. Splunk probably won't be able to handle the JSON field extraction because of the preamble line, so just extract the fields at search time.

props.conf:

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 25

search-time field extraction regex (the leading group captures the preamble before the JSON):

(?<preamble>[^"]+)"abrMode":"(?<abrMode>[^"]+)","abrProto":"(?<abrProto>[^"]+)","event":"(?<event>[^"]+)","sUrlMap":"(?<sUrlMap>[^"]+)","sc":\{"Host":"(?<Host>[^"]+)","OriginMedia":"(?<OriginMedia>[^"]+)","URL":"(?<URL>[^"]+)"\},"sm":\{"ActiveReqs":(?<ActiveReqs>\d+),"ActiveSecs":(?<ActiveSecs>\d+),"AliveSecs":(?<AliveSecs>\d+),"MediaSecs":(?<MediaSecs>\d+),"SpanReqs":(?<SpanReqs>\d+),"SpanSecs":(?<SpanSecs>\d+)\},"swnId":"(?<swnId>[^"]+)","wflow":"(?<wflow>[^"]+)"
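To wire that regex in at search time, one option is an EXTRACT setting in props.conf on the search head; a minimal sketch, where the sourcetype name your_sourcetype is an assumption:

[your_sourcetype]
EXTRACT-session_fields = (?<preamble>[^"]+)"abrMode":"(?<abrMode>[^"]+)","abrProto":"(?<abrProto>[^"]+)","event":"(?<event>[^"]+)","sUrlMap":"(?<sUrlMap>[^"]+)","sc":\{"Host":"(?<Host>[^"]+)","OriginMedia":"(?<OriginMedia>[^"]+)","URL":"(?<URL>[^"]+)"\},"sm":\{"ActiveReqs":(?<ActiveReqs>\d+),"ActiveSecs":(?<ActiveSecs>\d+),"AliveSecs":(?<AliveSecs>\d+),"MediaSecs":(?<MediaSecs>\d+),"SpanReqs":(?<SpanReqs>\d+),"SpanSecs":(?<SpanSecs>\d+)\},"swnId":"(?<swnId>[^"]+)","wflow":"(?<wflow>[^"]+)"

You can also test the regex inline with the rex command before committing it to props.conf.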
So this issue was being caused by an fapolicyd deny-all rule. Once I moved the rule out of /etc/fapolicyd/rules.d, it let me upgrade Splunk.
Thank you. After my investigation of the problem, this is a super sparse search. I needed to add IOPS to solve it. I raised the IOPS to 25,000 and the search speed improved amazingly. It's done!
Thank you for your reply. I have 1 billion events every day being ingested into the Splunk indexer. I checked the monitoring console and didn't see any abnormalities.
Thank you, I tried searching with the TERM directive, but search speed is still slow.
Hi @Cheng2Ready Yes, you just have to split each line of the field out as a separate event, then you can use stats last to grab the last line. Note that makemv needs a tokenizer here, since its default is to split on spaces rather than newlines:

index=example "House*" Message=*
| makemv tokenizer="([^\r\n]+)" Message
| mvexpand Message
| stats last(Message) as last_line
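A variant sketch that avoids mvexpand by taking the last element per event with mvindex; it assumes Message is a single multi-line string as in the question, and urldecode("%0A") just produces a newline character to split on:

index=example "House*" Message=*
| eval last_line=mvindex(split(Message, urldecode("%0A")), -1)
| table last_line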
cloud instance 
They're on the same network, using intranet bandwidth, and they have 100MB of bandwidth.
There is no pattern or punctuation, so running regex might not work in this situation since I can't know what kind of error or pattern will appear in the final line/sentence in the field. The last sentence can be anything and unpredictable, so I just wanted to see if there is a way to grab the last line of the log that is in the field. This example most likely won't help but paints a picture that I just want the last line.

index=example
| search "House*"
| table Message

The log looks similar to this:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example /local/line499
D://example ......a bunch of sensitive information
D://example /crab/lin650
D://example ......a bunch of sensitive information
D://user/local/line500

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : someone stepped on the wire.

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://user/local/line980 ,indo

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : Simon said Look

Goal:

D://user/local/line500
Error : someone stepped on the wire.
D://user/local/line980 ,indo
Error : Simon said Look

I hope this makes sense....
From the screenshot, you have started the ALD session, and the Summary shows you have successfully started and stopped CollectionCapture... unfortunately, no Java Collections were eligible for evaluation. If you look at the middle section of the screen, it gives the explanation/details. To qualify for evaluation, a Collection must have a certain size and number of elements. Two App Server Agent settings/parameters are mentioned:

minimum-size-for-evaluation-in-mb - The default value is 5MB (I think). Depending on your application, you may want to increase or decrease this value.

minimum-number-of-elements-in-collection-to-deep-size - The default is 1000 elements, which may be too large for some applications. If your application is small and we're not sure any collection has about 1000 elements, we can try lowering this value.

Next is Start On Demand Capture Session:

If Session Duration is too small, we may not have a sufficient time window to capture those Collections (objects/classes). If the default 10 mins shows nothing, then try 15.

If Collection Age is too small, the Collection is too "young" and hence its size may not be enough to be a candidate for evaluation. Go with the default 2 mins.

If all the criteria are good, you should see something like the screenshot below. Thanks.
Do you have the link to this extension? I cannot find it.