
All Posts

Hi @zksvc, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.2/Forwarding/Aboutforwardingandreceivingdata (there are also many videos explaining this). In a few words: enable Splunk to receive logs, install the Universal Forwarder on the Windows system, install Splunk_TA_Windows on the Universal Forwarder, and enable the inputs in Splunk_TA_Windows. Ciao. Giuseppe
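A minimal configuration sketch of those steps, assuming a receiving indexer reachable at splunk-indexer.example.com on port 9997 (the hostname, output group name, and event log choice are placeholders, not from the original post):

# On the indexer: enable receiving (inputs.conf, or Settings > Forwarding and receiving)
[splunktcp://9997]
disabled = 0

# On the Universal Forwarder: outputs.conf pointing at the indexer (hostname is a placeholder)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997

# On the Universal Forwarder: Splunk_TA_Windows local/inputs.conf, enable the inputs you need
[WinEventLog://Security]
disabled = 0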
Hi Everyone, I created my own lab to learn how to configure best practices for Windows. I set up one Windows VM and ran a scan locally (127.0.0.1) to get information such as open ports. Unfortunately, when the scan runs I can't see any results in Splunk. Maybe I need to configure something in my Windows VM or somewhere else?
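As a first check, a hedged search to confirm whether anything from the VM is reaching Splunk at all (the host value is a placeholder for your Windows VM's hostname):

index=* host="my-windows-vm" earliest=-60m
| stats count by index, sourcetype, source

If this returns nothing, the forwarder or its inputs are not sending data yet; if it returns events, the data is arriving and the problem is more likely with the search or field extractions.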
You can use the regex approach as @gcusello suggested, with a small modification:

| rex field=alert.alias "(?<field1>[^_]+(_[^_]+){2})_(?<field2>.+)"

Because the string is strictly formatted, you can also use split to achieve the same. Depending on the number of events you handle, the following could be more economical:

| eval elements = split('alert.alias', "_")
| eval field1 = mvjoin(mvindex(elements, 0, 2), "_"), field2 = mvjoin(mvindex(elements, 2, -1), "_")

Here is an emulation:

| makeresults format=csv data="alert.alias
STORE_8102_BOXONE_MX_8102
STORE_8102_BOXONE_MX_8102_01"

Either of the above searches gives

alert.alias                   field1             field2
STORE_8102_BOXONE_MX_8102     STORE_8102_BOXONE  BOXONE_MX_8102
STORE_8102_BOXONE_MX_8102_01  STORE_8102_BOXONE  BOXONE_MX_8102_01
Sure. startswith and endswith can also be sophisticated.

| rename "Log text" as LogText
| transaction maxspan=120s startswith=eval(match(LogText, "\bdisconnected\b")) endswith=eval(match(LogText, "\bconnected\b")) keeporphans=true
| where isnull(closed_txn)

Here is an emulation:

| makeresults format=csv data="Row, _time, Log text
1, 7:00:00am, text connected\ntext
2, 7:30:50am, text\ndisconnected\n\ntext
3, 7:31:30am, text connected\ntext
4, 8:00:10am, text\ndisconnected\n\ntext
5, 8:10:30am, text\ndisconnected\n\ntext"
| eval _time = strptime(_time, "%I:%M:%S%p"), "Log text" = replace('Log text', "\\\n", " ")
| sort - _time
``` data emulation above ```

The above search gives (the remaining transaction columns are empty for these orphan events):

LogText Row _raw _time closed_txn duration eventcount field_match_sum linecount
text disconnected text  5  1  2024-12-18 08:10:30
text disconnected text  4  1  2024-12-18 08:00:10
I don't think I understand what you're trying to do. $Beta status:result._statusNumber$ is a token set by your search, "Beta status", and therefore has no default value. The screenshot you've shown is for setting tokens when users click on a visualisation. The two things are not related, really, other than how they are used in source code.  What issue are you trying to solve, exactly? If the token isn't working, have you made sure you've checked the "Access search results or metadata" box in the data source config? 
@KyleMika Heya!  This works for me, at least as I understand what you are trying to do. This is the URL (anonymised) for a dashboard I'm using:  https://xxxxx.splunkcloud.com/en-US/app/<app_name>/<dashboard>?form.LHN=LHN&form.dd_qw5k3GKd=*&form.filter=age&form.l1=*&form.l2=*&form.l3=*&form.rm=*&form.tok_asset_name=*&form.tok_rm=CORE&form.tok_specialty=*&form.tok_tier=*&form.tok_test=nope The last input is in the canvas, the rest are above. All are successfully reproduced if I share it.  Can you share the URL that you are seeing/sharing with others? 
Heya Splunk Community folks, In an attempt to make a fairly large table in DS readable, I was messing around with fontSize, and I noted that the JSON parser in the code editor was telling me that pattern: "^>.*" is valid for the property: options.fontSize. Is that actually enabled in DS, does anyone know? In other words, can I put a selector/formatting function in (for example, formatByType) and have the fontSize selected based on whether the column is a number or text type? If so, what's the syntax for the context definition? For example, is there a way to make this work? "fontSize": ">table | frameBySeriesTypes(\"number\",\"string\") | formatByType(fontPickerConfig)" (If not, there should be!) Thanks!
A data model is created with a root search dataset and acceleration is enabled.

rootsearchquery1: index=abc sourcetype=xyz field_1="1"
rootsearchquery2: index=abc sourcetype=xyz field_1="1" | fields _time field_2 field_3

For both queries, the auto-extracted fields are added (_time, field_2, field_3). These are general questions for better understanding; I would like suggestions on which usage (tstats, datamodel, root event, root search with a streaming command, root search without a streaming command) is preferable in which scenario.

1. | datamodel datamodelname datasetname | stats count by field_3
For Query 1, the output is pretty fast, just below 10 seconds (root search without a streaming command). For Query 2, the output takes more than 100 seconds (root search with a streaming command).

2. For Query 2, the tstats command also takes more than 100 seconds and only returns results when summariesonly=false is added. Why does it not return results when summariesonly=true is added? For Query 1, it works with both summariesonly=false and summariesonly=true, and the output is pretty fast, less than 2 seconds actually. So in what scenario is it recommended to add streaming commands to a root search and accelerate it, when in return the generated query adds the fields twice and becomes more inefficient? E.g. this is for Query 2:
| datamodel datamodelname datasetname | stats count by properties.ActionType
The underlying query that is running:
(index=* OR index=_*) index=abc sourcetype="xyz" field_1="1" _time=* DIRECTIVES(READ_SUMMARY(datamodel="datamodelname.datasetname" summariesonly="false" allow_old_summaries="false")) | fields "_time" field_2 field_3 | search _time = * | fields "_time" field_2 field_3 | stats count by properties.ActionType

3. In general, what is recommended: when a data model is accelerated, either | datamodel or | tstats gives better performance; when a data model is not accelerated, only | tstats gives better performance. Is this correct?

4. When a data model is not accelerated, the | datamodel command pulls the data from the raw buckets, so what is the point of querying the data through the data model instead of the index directly, when the performance is the same?

5. While querying | datamodel datamodelname datasetname, why does Splunk add (index=* OR index=_*) by default? Can that be changed?
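A hedged sketch of the tstats form of the same count, reusing the data model and dataset names from the question above (summariesonly=true only returns events already covered by the acceleration summary, which is one reason it can come back empty):

| tstats summariesonly=true count
    from datamodel=datamodelname.datasetname
    by datasetname.field_3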
Something like this?

| makeresults format=csv data="hostname
cn=192_168_1_1
cn=myhost
otherhostnane"
| rex field=hostname "cn=(?<ipAddr>\d{1,3}[._]\d{1,3}[._]\d{1,3}[._]\d{1,3})"
| eval hostname=coalesce(replace(ipAddr, "_", "."), hostname)
@Travlin1 something like this?

| makeresults
| eval cn=mvappend("192_168_1_1", "10_0_0_5", "webserver-prod01", "172_16_32_1", "database.example.com", "192_168_0_badformat", "dev_server_01")
| mvexpand cn
| eval converted_host=case(match(cn, "^\d+_\d+_\d+_\d+$"), replace(cn, "_", "."), true(), cn)
| eval host_type=case(match(cn, "^\d+_\d+_\d+_\d+$"), "ip_address", true(), "hostname")
| table cn, converted_host, host_type

If this helps, please upvote.
I am trying to track file transfers from one location to another.

Flow: files are copied from the file copy location -> target location. Both the file copy location and target location logs are in the same index, but each has its own sourcetype. The file copy location has a log event per file, whereas a target location event contains multiple file names.

Log format of the file copy location:
2024-12-18 17:02:50, file_name="XYZ.csv", file copy success
2024-12-18 17:02:58, file_name="ABC.zip", file copy success
2024-12-18 17:03:38, file_name="123.docx", file copy success
2024-12-18 18:06:19, file_name="143.docx", file copy success

Log format of the target location:
2024-12-18 17:30:10 <FileTransfer status="success">
    <FileName>XYZ.csv</FileName>
    <FileName>ABC.zip</FileName>
    <FileName>123.docx</FileName>
</FileTransfer>

Desired result:
File Name   FileCopyLocation      Target Location
XYZ.csv     2024-12-18 17:02:50   2024-12-18 17:30:10
ABC.zip     2024-12-18 17:02:58   2024-12-18 17:30:10
123.docx    2024-12-18 17:03:38   2024-12-18 17:30:10
143.docx    2024-12-18 18:06:19   Pending

Since the events are in the same index and there are many of them, I do not want to use join.
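A hedged sketch of a join-free approach, assuming sourcetype names filecopy and target (placeholders, as is the index name) and the log formats above: extract FileName from both sourcetypes, expand the multi-file XML events, then aggregate by FileName.

index=myindex (sourcetype=filecopy OR sourcetype=target)
| rex field=_raw "file_name=\"(?<FileName>[^\"]+)\""
| rex field=_raw max_match=0 "<FileName>(?<FileName>[^<]+)</FileName>"
| mvexpand FileName
| stats min(eval(if(sourcetype=="filecopy", _time, null()))) as copy_time
        min(eval(if(sourcetype=="target", _time, null()))) as target_time
        by FileName
| eval FileCopyLocation=strftime(copy_time, "%Y-%m-%d %H:%M:%S"),
       TargetLocation=if(isnull(target_time), "Pending", strftime(target_time, "%Y-%m-%d %H:%M:%S"))
| table FileName, FileCopyLocation, TargetLocation

Files that have been copied but not yet transferred have no target-side event, so target_time stays null and the eval marks them as Pending.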
Hello everyone! I could most likely solve this problem given enough time, but I never seem to have enough.

Within Enterprise Security we pull asset information via LDAPsearch into our ES instance hosted in Splunk Cloud. The cn=* field contains multiple values, both IP addresses and hostnames. We aim for host fields to be either hostname or nt_host, but some of these values are written like this: cn=192_168_1_1

I want to evaluate the existing field and output those values as normal dotted IP addresses when they appear. I am assuming I would need an if statement that keeps hostname values intact and otherwise performs the conversion. I am not at a computer right now but will update with some data and my progress so far.

Thanks!
This fits into the "proving a negative" category, where you're trying to find things that are NOT reporting. This is the general way to do that:

index=firewalls OR index=alerts AND host="*dmz-f*"
| rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)"
| lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location
| rename Workday_Location as WorkDay_Location
``` normalise the field name so both legs aggregate on the same key ```
| stats dc(host) as hosts_reporting by WorkDay_Location
| append
    [| inputlookup fw_asset_lookup.csv where ComponentCategory="Firewall*"
     | stats count as expected_hosts by WorkDay_Location ]
| stats values(*) as * by WorkDay_Location
| eval diff=expected_hosts - hosts_reporting

So you do your basic search to count the reporting hosts, then append the list of hosts you expect to see, join the two together with the last stats, and calculate the difference.
You might double-check this, but if I remember correctly CSV lookups do a linear search through their contents, so in the pessimistic case you'll be doing a million comparisons per input row only to return a negative match. It has nothing to do with fuzziness.
This is an excellent app for finding old presentations and video sessions: https://splunkbase.splunk.com/app/3330
When you say fuzzy, do you mean it should match based on similarity, using something like Levenshtein distance? Do you want
123 main street
123 maine street
123 cain street
all to match? I have used this app https://splunkbase.splunk.com/app/5237 to do fuzzy lookups, but even on small lookups of a few hundred rows it is very slow. I expect this runs on the search head, due to the KV store, so you're going to be doing serial processing.

What size is your lookup? You may well be hitting the default limit defined (25MB): https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Limitsconf#.5Blookup.5D

What are you currently doing to be 'fuzzy' so your matches currently work, or are you really looking for exact matches somewhere in your data?

Is your KV store currently being updated, and is it replicated? It's probably not replicated, so all the work is on the SH. If you turn on replication (NB: I am not sure how the process works exactly), the store will get replicated to the indexers as a CSV, and if you have multiple indexers you may benefit from parallel processing.

Also, if you are just looking for some exact match somewhere, the KV store may benefit from accelerated fields - that can speed up lookups against the KV store (if that's the way you're doing it) significantly. https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Collectionsconf

What's your search - maybe that can be optimised as well.
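A minimal collections.conf sketch of an accelerated field, assuming a collection named address_collection and a field named address (both placeholders, not from the original post):

# collections.conf in the app that owns the KV store collection
[address_collection]
accelerated_fields.address_accel = {"address": 1}

Acceleration only helps exact-match lookups against that field; it won't speed up a similarity-based comparison.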
@KyleMika Hello! What version of Splunk are you on? You might find something in this: https://www.splunk.com/en_us/blog/platform/new-year-new-dashboard-studio-features-what-s-new-in-8-2-2201.html?locale=en_us

Did you check out the Feature section here? https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/DashStudio/IntroFrame#Dashboard_features

If this helps, please upvote.
I currently have 2 different tables. The first shows the number of firewalls each location has (WorkDay_Location), from an inventory lookup file; the second shows how many firewalls are logging to Splunk, by searching the firewall indexes to validate that they are logging. I would like to combine them and add a 3rd column that shows the difference.

I run into problems with multisearch since I am using a lookup (via inputlookup), and another lookup where I search for firewalls by hostname: if the hostname contains a certain naming convention, it is matched against a lookup file that maps hostname to WorkDay_Location.

FIREWALLS FROM INVENTORY - by Workday Location
| inputlookup fw_asset_lookup.csv
| search ComponentCategory="Firewall*"
| stats count by WorkDay_Location

FIREWALLS LOGGING TO SPLUNK - by Workday Location
index=firewalls OR index=alerts AND host="*dmz-f*"
| rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)"
| lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location
| stats dc(host) by Workday_Location
| sort Workday_Location

Current output:

Table 1: Firewalls from Inventory search
WorkDay_Location  count
Location_1        5
Location_2        5

Table 2: Firewalls Logging to Splunk search
WorkDay_Location  count
Location_1        3
Location_2        5

Desired output:

WorkDay_Location  FW_Inventory  FW_Logging  Diff
Location_1        5             3           2
Location_2        5             5           0

Appreciate any help if this is possible.
Yup. +1 on eventstats. Stats will aggregate all the data, leaving you with just the max value. Appendpipe will append the stats at the end, but you'll still have them as a separate entity. You could use a subsearch, but it would be ugly and inefficient (you'd effectively have to run the main search twice). Eventstats it is.

But since eventstats has limitations, you can cheat a little:

| sort - title1 title4
| dedup title1

It doesn't replace eventstats in the general case, but for a max or min value it might be a bit quicker than eventstats and will almost surely have a lower memory footprint; see the sketch below.
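A hedged sketch of that dedup shortcut, reusing the data emulation from the answer below and assuming value is the field being maximised per title1 (field names are taken from that emulation, not from the original question):

| makeresults count=300
| fields - _time
| eval title1="Title".mvindex(split("ABC",""), random() % 3)
| eval value=random() % 100
| eval title4="Title4-".mvindex(split("ZYXWVUTSRQ",""), random() % 10)
``` data emulation, borrowed from the eventstats answer below ```
| sort 0 - value
| dedup title1
| table title1 title4 value

Because dedup keeps the first event it sees for each title1, sorting descending by value first leaves exactly one row per title1: the one carrying the maximum value.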
@gcusello a couple of ways with eventstats:

| makeresults count=300
| fields - _time
| eval title1="Title".mvindex(split("ABC",""), random() % 3)
| eval value=random() % 100
| eval title4="Title4-".mvindex(split("ZYXWVUTSRQ",""), random() % 10)
``` Data creation above ```
| eventstats max(value) as max_val by title1
| stats values(eval(if(value=max_val, title4, null()))) as title4 max(max_val) as max_val by title1

Or depending on your title4 data you can put in another stats, i.e. after the data set up above, do

``` Reduce the data first before the eventstats ```
| stats max(value) as max_val by title1 title4
| eventstats max(max_val) as max by title1
| stats values(eval(if(max_val=max, title4, null()))) as title4 max(max) as max by title1

This way the eventstats works on a far smaller dataset, depending on your cardinality.