All Posts

Hi @Splunkerninja, In classic dashboards, you can install and use the Treemap visualization <https://splunkbase.splunk.com/app/3118>.
There are several patterns illustrated for use with renderXml = true and $XmlRegex:
<Provider[^>]+Name=["']Microsoft-Windows-Security-Auditing["']
<EventID>4688<\/EventID>
<Data Name=["']NewProcessName["']>C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe<\/Data>
<Data Name=["']ParentProcessName["']>C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe<\/Data>
Recall that % was used as a start and end delimiter and is not part of the pattern.
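Put together, a minimal inputs.conf sketch combining renderXml = true with one of these patterns (the stanza name and whitelist number here are illustrative, not taken from the original post) would look something like:

[WinEventLog://Security]
renderXml = true
whitelist1 = $XmlRegex%<EventID>4688<\/EventID>%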
Hi @AL3Z, Please read <https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf#Event_Log_allow_list_and_deny_list_formats> carefully. If renderXml = false, yes, you can use EventCode and Message in your blacklist settings. It appears you have set the suppress_* settings to true. You should only set those to true if either (a) renderXml = true or (b) you want to exclude the fields from your events as illustrated by your image.
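As a rough sketch of the renderXml = false form described in that documentation (the event code and message regex below are placeholders, not values from your environment):

[WinEventLog://Security]
renderXml = false
blacklist1 = EventCode="4688" Message="TaniumClient\.exe"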
Interesting, it sounds like you have the energy to dig a little deeper. Take a look at these links:
https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-clara-fication-job-inspector.html
https://conf.splunk.com/files/2020/slides/TRU1143C.pdf
They show how you can dive into debug logging and the search log - maybe that will throw up something useful.
I am not sure what else to suggest - for some reason you have gone back to a 5 minute cron window with a 15 minute time range, which is something I earlier suggested you change. I also suggested using a specific earliest/latest time window, which you do not appear to be doing. It is also not clear what you meant in your original post about incidents coming in at 9:16 with events at 8:20. Unless you are able to give detail about events/times and specific detail of the problem, it is impossible for anyone to offer concrete advice that will help you. You would need to provide an example showing:
- the time at which the events are visible in Splunk
- the cron schedule for the alert
- the time window for the alert search
- the run of the alert that does not show the expected data
You need to give a clearer description.  For example, does the phrase "throw the result" mean to discard events when src_ip is found in file.csv, or to preserve only the matching events in order to raise an alert?  Have you read the command documentation for lookup? Here I will give an example assuming that your goal is actually to preserve matching events, and assuming that file.csv contains a single column malicious_ip (the lookup field from the CSV comes first, matched AS the event field):
| lookup file.csv malicious_ip AS src_ip OUTPUT malicious_ip AS matching
| where isnotnull(matching)
The problem is Splunk always flattens arrays.  The trick is to preserve logs{} as a vector before mvexpand.
index="factory_mtp_events"
| spath path=logs{} ``` alternative syntax: | spath logs{} ```
| mvexpand logs{}
| search test_name="Sample Test1"
I have a single search head and configured props.conf with DATETIME_CONFIG = CURRENT, as I want the data to be indexed at the time Splunk receives the report. I restarted Splunk after every change. Previously I had it set to a field in the report. When I upload a CSV and use the correct sourcetype, it assigns the current time to the report. When I upload a report via curl through the HEC endpoint, it indexes it with the right time. Same thing when I run it through a simple script. But when the test pipeline runs, it indexes data to the timestamp that is in the report, even though it is using the same sourcetype as the other tests I did. Is it possible to add a time field that overrides the sourcetype config? Is there a way to see the actual API request in the Splunk internal logs?
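For reference, the HEC /services/collector/event endpoint accepts an explicit time field (epoch seconds) in the JSON envelope, which sets the event timestamp independently of the sourcetype's timestamp settings. A minimal sketch of such a request (host, token, and payload fields are placeholders):

curl -k "https://splunk-host:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"time": 1700000000, "sourcetype": "my_report", "event": {"result": "pass"}}'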
I have a drilldown into another dashboard with parameters earliest=$earliest$ and latest=$latest$, and this works. When I go into the drilldown dashboard directly, it sets the data to come back as "all time".  Is there a way that I can have multiple defaults or some other constraint that doesn't cause this? Here's what I've been working on but it's not working. Any feedback would be helpful...
<input type="time" token="t_time">
  <default>
    <earliest>if(isnull($url.earliest$), "-15m@m", $url.earliest$)</earliest>
    <latest>if(isnull($url.latest$), "now", $url.latest$)</latest>
  </default>
</input>
Good day. I am somewhat new to Splunk. I am trying to cross-reference some malicious IPs I have in a file.csv against the src_ip field, and if there are matches, output the result. I understand that you have to create a lookup, but I cannot get any further.
No, I don't believe so. 
You could try setting initial / default values for the filters. Other than that, you may have to share the source of your dashboard for us to be able to suggest changes.
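As an illustration of a default value on an input (the token name and choices here are made up, not taken from the dashboard in question), a Simple XML filter with a default looks something like:

<input type="dropdown" token="status_tok" searchWhenChanged="true">
  <label>Status</label>
  <default>*</default>
  <choice value="*">All</choice>
  <choice value="error">Errors only</choice>
</input>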
You can use a UF (or HF) for directly monitoring a network input (tcp or udp port). Be aware however of the shortcomings of such a solution.
1. You can only define one sourcetype for a given input, so if you want to listen for data from several different sources you have to either create multiple inputs or do some complicated index-time rewriting and rerouting. Not easy to maintain.
2. There used to be some performance problems compared to a specialized syslog daemon.
3. You lose network-level metadata.
So if you can live with that, you can define a tcp or udp input. But it's not a recommended solution.
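A minimal inputs.conf sketch of such an input (the port, index, and sourcetype below are just examples):

[udp://514]
sourcetype = syslog
index = network
connection_host = ip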
It's a stanza defining a type of input (powershell in this case) and a unique name for it. In some cases (for example the monitor inputs) input names also have functional meaning (a monitor input's name points to the monitored files or directories on disk).
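For example, a hypothetical powershell input stanza (the name, script, and other settings below are made up for illustration) might look like:

[powershell://MyProcessCheck]
script = Get-Process | Select-Object Name, CPU
schedule = */5 * * * *
sourcetype = powershell_process
index = windows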
I'm a bit lost here. If you're doing vMotion between datastores, why bother with logical storage operations within the VMs?
Hi JP, Thank you for getting back to my message. I have gone through the documentation to enable HEC and generate the token. I also enabled SSL. This is the part of the documentation I am getting confused about: <protocol>://<host>:<port>/<endpoint>.  And I have it set as this:
https://computer-name:8088/services/collector/event
Is that correct?
Hello, I have a dashboard with a few panels. All the panels have drilldown tokens that update a table. I also have a few filters that are supposed to update the table, but I see the message "Search is waiting for input..." when selecting options from the filters.  The table updates only when clicking on the other panels. How can I update the table from the filters?   Thanks
Did you see any change in the data being ingested when you made the TRUNCATE value change?  Also, if you change it to something specific to test it, like 10237, does that limit it to 10237 bytes?  This is mainly just to see if this particular TRUNCATE setting is what is limiting your data, and maybe we can help rule it out as the culprit so we know to dig further.
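A test like that would be a props.conf change along these lines (the sourcetype name is a placeholder; TRUNCATE is a parse-time setting, so it belongs on the indexer or heavy forwarder that first parses the data):

[your_sourcetype]
TRUNCATE = 10237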
Based on your description it sounds like you are configuring the Blue Prism side of things to point at the HTTP Event Collector (HEC).  To get the URL and token, you'll need to make sure the HEC configuration has been completed on the Splunk side so Blue Prism has a place to send its data. Here's the documentation that pertains to configuration of the HEC component in Splunk: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector 
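Once HEC is enabled and a token has been created, a quick way to confirm the endpoint works before pointing Blue Prism at it (the host below is a placeholder; 8088 is the default HEC port) is something like:

curl -k "https://your-splunk-host:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from curl", "sourcetype": "hec_test"}'

A successful request returns {"text":"Success","code":0}.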
We are planning a migration of hot drives to faster disk for all indexers. The current plan is below, and I wonder if it makes sense, because the plan applies maintenance mode for each indexer, and I grew up with using maintenance mode for the entire operation. Any thoughts? The current plan:

Event monitoring
- Suspend monitoring of alerts on the indexers

(Repeat the following for each indexer, one at a time)

Splunk Ops:
- Put the Splunk cluster in maintenance mode
- Stop the Splunk service on one indexer

VM Ops:
- vMotion the existing 2.5TB disk to any Unity datastore
- Provision a new 2.5TB VM disk from the VSAN datastore

Linux Ops:
- Rename the existing hot data logical volume "/opt/splunk/hot-data" to "/opt/splunk/hot-data-old"
- Create a new volume group and mount the new 2.5TB disk as "/opt/splunk/hot-data"

Splunk Ops:
- Restart the Splunk service on the indexer
- Take the indexer cluster out of maintenance mode
- Review the Cluster Master to confirm the indexer is processing and rebalancing has started as expected
- Wait a few minutes to allow Splunk to rebalance across all indexers

(Return to the top and repeat the steps for the next indexer)

Splunk Ops:
- Validate service and perform test searches
- Check the CM panel -> Resources/usage/machine (bottom panel - IOWait Times) and monitor changes in IOWait

Event monitoring
- Enable monitoring of alerts on the indexers

In addition, Splunk PS suggested using:

splunk offline --enforce-counts

Not sure if it's the right way, since it might need to migrate the ~40TB of cold data, and that would slow the entire operation.
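For reference, a rough sketch of the CLI commands behind that per-indexer loop (the install path is an assumption; the maintenance-mode commands run on the cluster manager, the stop/start or offline commands run on the peer being migrated):

# on the cluster manager, before touching a peer
/opt/splunk/bin/splunk enable maintenance-mode
# on the indexer (peer) being migrated
/opt/splunk/bin/splunk stop
# ... disk migration and remount happen here ...
/opt/splunk/bin/splunk start
# back on the cluster manager, once the peer has rejoined
/opt/splunk/bin/splunk disable maintenance-mode
# the alternative Splunk PS suggested, run on the peer instead of a plain stop;
# it waits for replication/search factor to be met, which can mean copying buckets first
/opt/splunk/bin/splunk offline --enforce-counts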