All Posts

I have a lookup file bad_domain.csv containing baddomain.com, baddomain2.com, baddomain3.com. I want to search the proxy logs to find which people connect to the bad domains in my lookup list, but including subdomains. Example: subdo1.baddomain.com, subdo2.baddomain.com, subdo1.baddomain2.com. Please help, how do I create that condition in an SPL query?
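One rough sketch of a common approach: build the match terms from the lookup in a subsearch and let format turn them into an OR condition for the outer search. The names below are assumptions, not taken from the post: the lookup is assumed to have a header column called domain, the proxy events a destination field called dest_domain, and the proxy data to live in index=proxy.

index=proxy
    [ | inputlookup bad_domain.csv
      | eval dest_domain="*".domain
      | fields dest_domain
      | format ]
| stats count by user, dest_domain

Note that a pattern like "*baddomain.com" also matches names such as notbaddomain.com; if that matters, an alternative is a lookup definition in transforms.conf with match_type = WILDCARD(domain) and lookup entries written as *.baddomain.com.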
Is it possible to take the Splunk Admin certification after the Splunk Power User certification has expired?
This is a different question to the one asked. How do you know the location of the servers, and does the data for each panel come from the same search? If it comes from the same search then you would be better off having a base search, see here https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/Savedsearches, where your base search does all the data selection and aggregation and then each of the panels only shows the data from that base search that relates to the region of the servers/clients they want.
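As a minimal sketch of that pattern in Simple XML (the index, sourcetype and region field below are placeholders, not taken from the question): a single base search is defined once, and each regional panel post-processes it.

<form version="1.1">
  <label>Servers by region</label>
  <!-- base search: runs once and does all the data selection/aggregation -->
  <search id="base_servers">
    <query>index=main sourcetype=server_metrics | stats count by host, region</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <title>US servers</title>
      <table>
        <!-- post-process search: filters the shared results, no extra data scan -->
        <search base="base_servers">
          <query>| search region="US"</query>
        </search>
      </table>
    </panel>
    <panel>
      <title>UK servers</title>
      <table>
        <search base="base_servers">
          <query>| search region="UK"</query>
        </search>
      </table>
    </panel>
  </row>
</form>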
Hi @KendallW, I tried as you suggested but it still doesn't seem to work. Below is a part of my dashboard code:

"viz_myN1qvY3": {
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_Ir18jYj7"
    },
    "title": "Availability By Market",
    "options": {
        "backgroundColor": "transparent",
        "tableFormat": {
            "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
            "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)",
            "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",
            "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
        },
        "headerVisibility": "fixed",
        "fontSize": "small",
        "columnFormat": {
            "Availability": {
                "data": "> table | seriesByName(\"Availability\") | formatByType(AvailabilityColumnFormatEditorConfig)",
                "rowColors": "> table | seriesByName('Availability') | pick(AvailabilityRowColorsEditorConfig)",
                "rowBackgroundColors": "> table | seriesByName(\"Availability\") | rangeValue(AvailabilityRowBackgroundColorsEditorConfig)",
                "align": "center"
            }
        }
    },
    "context": {
        "AvailabilityColumnFormatEditorConfig": {
            "number": {
                "thousandSeparated": false,
                "unitPosition": "after",
                "precision": 2
            }
        }

The Availability column still has values aligned to the right.
Hi @Joshua2, as @KendallW also said, this isn't the way Splunk works: you cannot locally store data on a UF. The UF has a local cache that stores data if the Indexers aren't available, but only for a short time, and it isn't possible to copy cached logs to a USB drive. You should review your requirements with a Splunk Certified Architect or a Splunk Professional Services specialist to find a solution: e.g. send logs to a local syslog, or copy them to text files (using a script) and then store them on the USB drive. But as I said, this solution must be designed by an expert; this isn't a question for the Community. Ciao. Giuseppe
Hi @sidnakvee, If you don't see any other host in _internal, this means that your PCs aren't connected to Splunk Cloud. As described at https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsingforwardingagentsCloud, you have to download the Splunk Forwarder app from Splunk Cloud, which contains the credentials and configurations to connect to your Splunk Cloud instance. So the sequence of activities will be: install the Splunk Universal Forwarder on your PC, download and install the Splunk Forwarder app from your Splunk Cloud instance, download and install Splunk_TA_Windows and the Splunk App for Sysmon from apps.splunk.com, enable the wanted inputs in both apps, enable Sysmon on your PC, and probably restart Splunk on the forwarder. Let me know. Ciao. Giuseppe
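To illustrate the "enable the wanted inputs" step: in the Windows add-on this is typically done in a local/inputs.conf, roughly as sketched below. The channels shown are the usual Windows Event Log stanzas; check the add-on's default/inputs.conf for the exact stanza names shipped with your version, and do the equivalent for the Sysmon add-on's input.

# Splunk_TA_windows/local/inputs.conf - enable the event log channels you need
[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

[WinEventLog://Application]
disabled = 0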
Has anyone ever faced or implemented this on Splunk ES? I'm facing an issue when trying to add a TAXII feed from an OTX API connection. I have already checked the connectivity and made some changes to the configuration, up to disabling the preferred captain on my search head, but it is still not resolved. I also know there is an app for this, but I just want to clarify whether this option is still supported or not. Here are my POST arguments:

URL: https://otx.alienvault.com/taxii/discovery
POST argument: collection="user_otx" taxii_username="API key" taxii_password="foo"

But the download status stays on "TAXII feed polling starting", and when I check the PID information:

status="This modular input does not execute on search head cluster member" msg="will_execute"="false" config="SHC" msg="Deselected based on SHC primary selection algorithm" primary_host="None" use_alpha="None" exclude_primary="None"
As per @ITWhisperer's comment, yes, it is case sensitive. Use eval with upper() or lower() to convert them all to the same case.
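For example (the field names here are placeholders, not from the original question):

| eval fieldA=lower(fieldA), fieldB=lower(fieldB)
| where fieldA=fieldB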
Hi @Joshua2 I won't judge the solution design, but the intended use of the Universal Forwarder is to forward logs, not store them locally for later manual transfer. You can do this with the UF by setting up local indexing on each machine; however, you would have to pay for license usage as the data is indexed at the UF tier, and then pay for license usage again when it is transferred and indexed into your Splunk Enterprise instance later. So you'd be paying twice to index the same data. Also note there are performance implications for local indexing, and there are very limited parsing options on the UF, so you'd need to set up parsing later at the indexer anyway. If you're OK with that option, you can do it by setting the indexAndForward setting in outputs.conf:

[tcpout]
defaultGroup = local_indexing

[tcpout:local_indexing]
indexAndForward = true

A better option to store the logs locally would be to use a third-party log collection tool like Fluentd or Logstash, or write your own PowerShell scripts. Ideally you would use Splunk for its intended purpose by directly forwarding the logs from the 60 UFs to a Splunk indexer (or HF); however, I understand that may not be possible in this case.
A warm bucket will not be evicted if it is too new, on the premise that new data is more likely to be searched than old data. "New" is defined by hotlist_recency_secs and hotlist_bloom_filter_recency_hours in indexes.conf. Urgent mode eviction comes into play when there are not enough files eligible for normal eviction. In urgent mode, the hotlist_recency_secs and hotlist_bloom_filter_recency_hours settings are ignored.
Basically what I'm looking for is: I have a multiselect Server input. If I select 5 servers, of which 3 belong to the US and 2 to the UK, I want two panels. The US panel shows its clients (3 total), whereas the UK panel shows the identical thing, but only its 2 clients.
Hi! Is it possible to integrate the app with multiple ServiceNow instances? If yes, how do you "choose" the one you want to create the incident in? For example, if I am using: | snowincidentstream OR | snowincidentalert
I have around 60 standalone Windows laptops that are not networked. I am looking to install a UF to capture the Windows logs and have them stored on the local drive "c:\logs". The logs will then be transferred to a USB drive for archiving and indexed into Splunk for NIST 800 compliance, e.g. login success/failure. I am struggling to find the correct syntax for the UF to save locally, as it asks for a host and port. josh
This can be quite challenging, because if you then remove one of the sourcetypes from the auto-populated multiselect and then add a new index selection, it has to do some fiddly calculations to find out which of the indexes has changed and then only add the new sourcetypes to the current sourcetype selection. It can be done, but it's quite a messy thing to play with given all the possible permutations of what you can do. Here's an example of simply adding sourcetypes to the auto select - hopefully it will give you some pointers. I've put in a bunch of search panels that show you what's going on.

<form version="1.1" theme="light">
  <label>AutoSelectMulti</label>
  <init>
    <set token="pre_indexes"></set>
  </init>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="multiselect" token="tok_indexes" searchWhenChanged="true">
        <label>Indexes</label>
        <prefix>index IN (</prefix>
        <suffix>)</suffix>
        <valuePrefix>"</valuePrefix>
        <valueSuffix>"</valueSuffix>
        <delimiter> ,</delimiter>
        <fieldForLabel>index</fieldForLabel>
        <fieldForValue>index</fieldForValue>
        <search base="base_defs">
          <query/>
        </search>
      </input>
      <input type="multiselect" token="tok_sourcetype" searchWhenChanged="true">
        <label>Sourcetypes</label>
        <prefix>sourcetype IN (</prefix>
        <suffix>)</suffix>
        <valuePrefix>"</valuePrefix>
        <valueSuffix>"</valueSuffix>
        <delimiter>,</delimiter>
        <search base="base_defs">
          <query>| stats count by sourcetype</query>
        </search>
        <fieldForLabel>sourcetype</fieldForLabel>
        <fieldForValue>sourcetype</fieldForValue>
      </input>
    </panel>
    <panel>
      <table>
        <search id="base_defs">
          <query>| tstats values(sourcetype) as sourcetype count where index=* OR index=_* by index</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">2</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
      <table>
        <search base="base_defs">
          <done>
            <eval token="form.tok_sourcetype">$result.form_tok_sourcetype$</eval>
            <set token="tok_sourcetype">$result.tok_sourcetype$</set>
          </done>
          <query>| search $tok_indexes$ | stats values(sourcetype) as sourcetype | eval form_tok_sourcetype=sourcetype | eval tok_sourcetype="sourcetype in (\"".mvjoin(sourcetype, "\",\"")."\")"</query>
        </search>
        <option name="count">2</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
Hi @Viral_G, Try adding "align": "center" in the columnFormat tag of your tables, e.g.

"columnFormat": {
    "identity": {
        "data": "> table | seriesByName(\"identity\") | formatByType(identityColumnFormatEditorConfig)",
        "align": "center"
    },
    "usernames": {
        "data": "> table | seriesByName(\"usernames\") | formatByType(usernamesColumnFormatEditorConfig)",
        "align": "center"
    }
}
Hi @FeatureCreeep, try setting 'param.is_use_event_time' in alert_actions.conf, as discussed in this doc: https://lantern.splunk.com/Observability/Product_Tips/IT_Service_Intelligence/Configuring_notable_event_timestamps_to_match_raw_data
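A minimal sketch of what that could look like in a local alert_actions.conf on the ITSI search head; the stanza name below is an assumption (the ITSI event generator alert action), so verify it against the linked article before applying:

# alert_actions.conf (local) - assumed stanza name, check the linked doc
[itsi_event_generator]
param.is_use_event_time = 1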
Hi @arunsoni, For this data you can break the events before each timestamp appears. Splunk probably won't be able to handle the JSON field extraction because of the preamble line, so just extract the fields at search time.

props.conf:

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 25

Search-time field extraction regex:

"abrMode":"(?<abrMode>[^"]+)","abrProto":"(?<abrProto>[^"]+)","event":"(?<event>[^"]+)","sUrlMap":"(?<sUrlMap>[^"]+)","sc":\{"Host":"(?<Host>[^"]+)","OriginMedia":"(?<OriginMedia>[^"]+)","URL":"(?<URL>[^"]+)"\},"sm":\{"ActiveReqs":(?<ActiveReqs>\d+),"ActiveSecs":(?<ActiveSecs>\d+),"AliveSecs":(?<AliveSecs>\d+),"MediaSecs":(?<MediaSecs>\d+),"SpanReqs":(?<SpanReqs>\d+),"SpanSecs":(?<SpanSecs>\d+)\},"swnId":"(?<swnId>[^"]+)","wflow":"(?<wflow>[^"]+)"
So this issue was being caused by an fapolicyd deny-all rule. Once I moved the rule out of /etc/fapolicyd/rules.d, it let me upgrade Splunk.
Thank you. After investigating the problem, it turns out this is a super sparse search, and I needed to add IOPS to solve it. I raised the IOPS to 25,000 and the search speed improved amazingly. It's done!
Thank you for your reply. I have 1 billion events every day being ingested into the Splunk indexer. I checked the monitoring console and didn't see any abnormalities.