All Posts
Hello everyone! I could most likely solve this problem given enough time, but I never seem to have enough. Within Enterprise Security we pull asset information via ldapsearch into our ES instance hosted in Splunk Cloud. The cn=* field contains a mix of both IPs and hostnames. We aim for host fields to be either hostname or nt_host, but some of these values are written like this: cn=192_168_1_1. I want to evaluate the existing field and, when I see one of these, output it in normal dotted-decimal form. I am assuming I would need an if statement that keeps hostname values intact while performing the conversion on the rest. I am not at my computer right now but will update with some sample data and my progress so far. Thanks!
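A minimal sketch of the kind of eval described above, assuming the field is literally named cn and that the underscore-separated IPs always match a four-number pattern (both assumptions, not confirmed in the post):

``` convert cn values like 192_168_1_1 to 192.168.1.1, leave hostnames untouched ```
| eval cn_clean=if(match(cn, "^\d{1,3}(_\d{1,3}){3}$"), replace(cn, "_", "."), cn)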
This fits into the "proving a negative" where you're trying to find things that are NOT reporting This is the general way to do that index=firewalls OR index=alerts AND host="*dmz-f*" | rex field=h... See more...
This fits into the "proving a negative" where you're trying to find things that are NOT reporting This is the general way to do that index=firewalls OR index=alerts AND host="*dmz-f*" | rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)" | lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location | stats dc(host) as hosts_reporting by Workday_Location | append [ | inputlookup fw_asset_lookup.csv where ComponentCategory="Firewall*" | stats count as expected_hosts by WorkDay_Location ] | stats values(*) as * by WorkDay_Location | eval diff=expected_hosts - hosts_reporting so you do your basic search to count the reporting hosts and then add on the list of hosts you expect to see and then join them together with the last stats and then calc the difference
You might double-check this, but if I remember correctly CSV lookups do a linear search through their contents, so in the pessimistic case you'll be doing a million comparisons for each input row only to return a negative match. It has nothing to do with fuzziness.
This is an excellent app for finding old presentations and video sessions: https://splunkbase.splunk.com/app/3330
When you say fuzzy, do you mean it should match based on similarity, using something like Levenshtein distance? Do you want
123 main street
123 maine street
123 cain street
all to match? I have used this app https://splunkbase.splunk.com/app/5237 to do fuzzy lookups, but even on small lookups of a few hundred rows it is very slow. I'm expecting that this runs on the search head because of the KV store, so you're going to be doing serial processing.

What size is your lookup? You may well be hitting the default limit (25MB) defined here: https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Limitsconf#.5Blookup.5D

What are you currently doing to be 'fuzzy' so that your matches work today, or are you really looking for exact matches somewhere in your data?

Is your KV store currently being updated, and is it replicated? It's probably not replicated, so all the work is on the SH. If you turn on replication (NB: I am not sure how the process works exactly), the store gets replicated to the indexers as a CSV, and if you have multiple indexers you may benefit from parallel processing.

Also, if you are just looking for an exact match somewhere, the KV store may benefit from accelerated fields - that can speed up lookups against the KV store (if that's the way you're doing it) significantly. https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Collectionsconf

What's your search? Maybe that can be optimised as well.
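As an illustration of the accelerated fields idea, here is a minimal collections.conf sketch; the collection name address_lookup and the address field are hypothetical, not taken from this thread:

[address_lookup]
field.address = string
accelerated_fields.addr_accel = {"address": 1}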
@KyleMika Hello! What version of Splunk are you on? You might find something in this: https://www.splunk.com/en_us/blog/platform/new-year-new-dashboard-studio-features-what-s-new-in-8-2-2201.html?locale=en_us Did you check out the features section here: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/DashStudio/IntroFrame#Dashboard_features If this helps, please upvote.
I currently have two different tables. The first shows the number of firewalls each location has (WorkDay_Location) from an inventory lookup file, and the second shows how many firewalls are logging to Splunk, by searching the firewall indexes to validate that they are logging. I would like to combine them and have a third column that shows the difference. I run into problems with multisearch since I am using a lookup (via inputlookup), plus another lookup where I search for firewalls by hostname: if the hostname contains a certain naming convention it is matched against a lookup file that maps hostname to WorkDay_Location.

FIREWALLS FROM INVENTORY - by Workday Location
| inputlookup fw_asset_lookup.csv
| search ComponentCategory="Firewall*"
| stats count by WorkDay_Location

FIREWALLS LOGGING TO SPLUNK - by Workday Location
index=firewalls OR index=alerts AND host="*dmz-f*"
| rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)"
| lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location
| stats dc(host) by Workday_Location
| sort Workday_Location

Current output:

Table 1: Firewalls from Inventory search
WorkDay_Location    count
Location_1          5
Location_2          5

Table 2: Firewalls Logging to Splunk search
WorkDay_Location    count
Location_1          3
Location_2          5

Desired output:
WorkDay_Location    FW_Inventory    FW_Logging    Diff
Location_1          5               3             2
Location_2          5               5             0

Appreciate any help if this is possible.
Yup, +1 on eventstats. Stats will aggregate all the data, leaving you with just the max value. Appendpipe will append the stats at the end, but you'll still have them as a separate entity. You could use a subsearch, but it would be ugly and inefficient (you'd effectively have to run the main search twice). Eventstats it is. But since eventstats has limitations, you can cheat a little:

| sort 0 - value
| dedup title1

It doesn't replace eventstats in the general case, but for a max or min value it might be a bit quicker than eventstats and will almost surely have a lower memory footprint.
@gcusello a couple of ways with eventstats:

| makeresults count=300
| fields - _time
| eval title1="Title".mvindex(split("ABC",""), random() % 3)
| eval value=random() % 100
| eval title4="Title4-".mvindex(split("ZYXWVUTSRQ",""), random() % 10)
``` Data creation above ```
| eventstats max(value) as max_val by title1
| stats values(eval(if(value=max_val, title4, null()))) as title4 max(max_val) as max_val by title1

Or, depending on your title4 data, you can put in another stats, i.e. after the data setup above, do

``` Reduce the data first before the eventstats ```
| stats max(value) as max_val by title1 title4
| eventstats max(max_val) as max by title1
| stats values(eval(if(max_val=max, title4, null()))) as title4 max(max) as max by title1

This way the eventstats works on a far smaller dataset, depending on your cardinality.
I feel content accepting that bundle checksums are, by design, there to visualise the different states and not to detect modifications between validation and application. Thank you for the link though, I will definitely have a look at the presentation. With a little luck it might even be available online somewhere. All the best
While I still find this "counter intuitive" where checksums are concerned, I understand that this is by design. The validated bundle is modified upon application and hence the checksum changes. This is not to prevent pushing bundles that have been modified after validation, but rather to keep track of the states of changes/bundles. Thank you for the explanation.
Every bucket has to store every dimension value once, so if you are using a million unique IDs to reference combinations of fewer than a million unique dimension strings, you are making the situation worse. Using KV Store is a great idea for repetitive asset information, like adding context to a hostname, but in this situation you should still store the meaningful unique identifier (hostname) as a dimension. I believe your best solution will be some combination of dimensions and KV Store to enrich them, but don't go 100% in either direction, and if you start creating new unique keys to make it work, I think it's going too far. The only other suggestion I have is that if you have large logical groups of systems without overlapping dimensions, you could put them into separate indexes and use wildcards in your index filter to access them all. That will keep the TSIDX smaller and performance higher.
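A quick sketch of the wildcarded index filter idea; the index names matching metrics_site_* and the metric name cpu.usage are made up for illustration:

| mstats avg(cpu.usage) WHERE index=metrics_site_* BY host span=5m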
Have you looked at Factory Talk, Fiix, or Plex? They are paid software packages and my company sells them. I am actually looking to develop a Splunk solution for PLCs, as I am a long-time Splunk developer. I also just realized this question is 3.5 years old, ha!
Hello, I just started using the new Dashboard Studio at work and I am having a few problems. For one, with classic dashboards I was able to share my input configuration for drop-downs and such with someone else by sharing the current URL; this time, however, the URL does not seem to contain that information, and every time I share or reload it reverts to the defaults. What is the solution here to share?
@Aresndiz Check out https://docs.splunk.com/Documentation/Splunk/9.3.2/Alert/ThrottleAlerts . You can control alert throttling behavior using these settings in your saved search:

alert.suppress = 1
alert.suppress.period = <time-value>

For your specific case of wanting to suppress similar results but trigger on changes, you can also use:

alert.suppress.fields = <comma-separated-field-list>

This tells Splunk to only suppress alerts if the specified fields contain the same values. If the values change, a new alert will trigger even if it's within the suppression period. This gives you the balance of avoiding duplicate notifications while still catching important changes. If this helps, please upvote.
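For illustration, a minimal savedsearches.conf sketch putting those settings together; the stanza name, period, and field list below are hypothetical placeholders:

[My Error Alert]
alert.suppress = 1
alert.suppress.period = 60m
alert.suppress.fields = host,error_code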
https://community.splunk.com/t5/Getting-Data-In/Route-event-data-to-different-target-groups-issue/td-p/366461
Have you configured a TCP input on the HF, or are you using a separate syslog daemon (like rsyslog or syslog-ng) to receive those? If you are using an HF, then just add a real syslog server and collect the logs from its local files.
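A minimal inputs.conf sketch of that file-monitoring approach; the path, index, and sourcetype below are placeholders, assuming the syslog daemon writes its output under /var/log/remote/:

[monitor:///var/log/remote/*.log]
index = network
sourcetype = syslog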
Hi, I'm trying to add source information for the metric (like the k8s pod name, k8s node name, etc.) from splunk-otel-collector-agent and then send it to the gateway (data forwarding model). I tried using the attributes and resource processors to add the source info, then enabled those processors in the pipelines in agent_config.yaml. In gateway_config.yaml, I added the processors with from_attribute to read from the agent's attribute. But I couldn't add additional source tags to my metric. Can anyone help here? Let me know if you need more info; I can share it. Thanks, Naren
Can you post your DB input configuration? Add it in a code block (the </> button in the editor).
@Brett have you any answers to this?