All Posts

Hi All, I need to automate the execution of specific queries in Splunk Enterprise on a weekly basis, export the results as CSV files, and upload them to a designated SharePoint Online folder for visualization purposes. Based on your experience, what are the available options, and which one would you recommend?

Thanks,
John
It is absolute insanity that we continue to have this issue. Regex isn't that hard, but Splunk makes it harder by creating new rules and exceptions to those rules. Unfortunately, this is why Splunk is in the position it is in: not user-friendly, and lacking proper GUI features that allow testing before integration.
@robertlynch2020 to answer your composite field question: creating composite fields is simply a pattern to join MV fields where you have an equal correlation between those fields, i.e. for your example

... | fields traceId spanId parentSpanId start end
| eval composite=mvzip(mvzip(mvzip(mvzip(traceId, spanId, "###"), parentSpanId, "###"), start, "###"), end, "###")
| fields composite
| mvexpand composite
| eval tmp=split(composite, "###")
| eval traceId=mvindex(tmp, 0), spanId=mvindex(tmp, 1), parentSpanId=mvindex(tmp, 2), start=mvindex(tmp, 3), end=mvindex(tmp, 4)
| fields - tmp composite

so it's just a pattern that fits the scenario where using stats will not solve your problem. Note: always use fields to ensure ONLY the fields you want are expanded, so as to minimise memory usage - that also means using | fields - _time _raw, as those will remain after a positive fields statement because _-prefixed fields are not automatically excluded. Do NOT use table before an mvexpand, as table causes the data to be sent to the search head, so the expansion is done on the SH. (There is a possibility that it will be optimised away, but don't rely on that.) Explicitly use fields so that the work remains in the indexing tier; if you have multiple indexers, the memory footprint will be distributed.
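For readers less familiar with SPL, the mvzip/mvexpand pattern above is essentially a positional zip-join followed by a split. Here is a minimal Python sketch of the same idea (the sample values and the "###" delimiter are illustrative assumptions, not data from the thread):

```python
# Each "MV field" is a list; positions are assumed to correlate 1:1,
# which is the precondition for the mvzip pattern in the first place.
trace_id = ["t1", "t1"]
span_id = ["s1", "s2"]
parent_span_id = ["-", "s1"]
start = ["100", "120"]
end = ["110", "130"]

# Nested mvzip(..., "###") joins correlated positions into one string each.
composites = ["###".join(vals)
              for vals in zip(trace_id, span_id, parent_span_id, start, end)]

# mvexpand + split(composite, "###") turns each string back into one row.
keys = ["traceId", "spanId", "parentSpanId", "start", "end"]
rows = [dict(zip(keys, c.split("###"))) for c in composites]

print(rows[0]["spanId"])         # s1
print(rows[1]["parentSpanId"])   # s1
```

The delimiter only has to be a string guaranteed not to occur inside the field values, which is why an unusual token like "###" is used.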
Hi @MichaelM1  I've been doing some experimenting here as I'm genuinely interested in this kind of thing. Can I just check - your IF is a heavy forwarder, not a UF? I have a UF with the following inputs:

[monitor:///var/log/will.txt]
index=main
sourcetype=debug
_meta=GUIDe::"abc-123-def-456"

[monitor:///var/log/will2.txt]
index=main
sourcetype=debug

[monitor:///var/log/will3.txt]
index=main
sourcetype=debug
_meta=GUIDe::"abc-123-def-456" ProjectID::"TestProject"

I have a receiver (IDX in this case) with the following:

== props.conf ==
# sourcetype=debug
[debug]
TRANSFORMS-debug=extract_GUIDe,setCustomMetadata

== transforms.conf ==
[extract_GUIDe]
SOURCE_KEY = _meta
WRITE_META = true
REGEX = GUIDe::([^\s]+)
FORMAT = ExGUIDe::$1

[setCustomMetadata]
INGEST_EVAL = GUIDe:=COALESCE(GUIDe,"NotSpecified"), ProjectID:=COALESCE(ProjectID,"NotSpecified")

Walking through this a little, we have 3 inputs sending data from the UF with various combos of:
will - GUIDe only
will2 - no meta
will3 - GUIDe and ProjectID

(I realise you're using Project_ID not ProjectID, but I missed it before I started writing this up.) When this is sent to Splunk, note that if GUIDe/ProjectID is missing it is replaced with "NotSpecified" as per the INGEST_EVAL. The transform "extract_GUIDe" isn't actually required; it was just part of my proof-of-concept workings. The main thing here is the INGEST_EVAL - what it's doing is eval-ing those fields to be a COALESCE of either the field which is sent (if it is) or the "NotSpecified" value (if it isn't). So, this deals with incoming traffic from the UF.
I then added another set of props/transforms to deal with all data:

== props.conf ==
[default]
TRANSFORMS-setCustomMetadata=setCustomMetadata

[host::macdev]
TRANSFORMS-setCustomMetadata=setThisHostMetadata

== transforms.conf ==
# This is from the previous test
[setCustomMetadata]
INGEST_EVAL = GUIDe:=COALESCE(GUIDe,"NotSpecified"), ProjectID:=COALESCE(ProjectID,"NotSpecified")

[setThisHostMetadata]
INGEST_EVAL = GUIDe:=COALESCE(GUIDe,"999-999-999"), ProjectID:=COALESCE(ProjectID,"MyForwarderLayer")

When I look at data originating on my receiver, which is called "macdev" (this would be your IF), we can see that it's getting the 999-999-999 GUIDe stamped on it. If we look at data from any other host which doesn't have the GUIDe set at source, then it uses the "NotSpecified" value from the INGEST_EVAL. So I *think* this should help solve your problem. It's not completely clear how the receiver you are sending to is processing the GUIDe and dropping events when it's not present, but that would be easy to do with an INGEST_EVAL checking for the presence of the GUIDe field, so I'm assuming it's something like this:

[checkGUIDePresent]
INGEST_EVAL = queue=IF(GUIDe!="",queue,nullQueue)

Anyway, I really hope this helps! Please let me know how you get on and consider adding karma to this or any other answer if it has helped, and also accept an answer if it resolves your issue.
Regards
Will
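The COALESCE defaulting used in both INGEST_EVAL stanzas above can be sketched in plain Python. This is just an illustration of the "first non-null value wins" semantics; the dict standing in for an event's index-time fields is an assumption for the example:

```python
def apply_defaults(meta):
    """Mimic INGEST_EVAL field:=COALESCE(field, "NotSpecified"):
    keep the field if the forwarder sent it, otherwise stamp a default."""
    out = dict(meta)
    for field in ("GUIDe", "ProjectID"):
        # setdefault only writes the value when the key is absent,
        # just as COALESCE only falls through when the field is null.
        out.setdefault(field, "NotSpecified")
    return out

# Event from will.txt: GUIDe was sent, ProjectID was not.
print(apply_defaults({"GUIDe": "abc-123-def-456"}))
# Event from will2.txt: no metadata sent at all.
print(apply_defaults({}))
```

The per-host stanza works the same way, just with "999-999-999" / "MyForwarderLayer" as the fallback values.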
I need to run a small JavaScript file (main.js) across multiple websites. These websites may or may not have Splunk RUM already running on them. Can I package the Splunk RUM SDK into my main.js, so that it only collects data/stats from my main.js, rather than collecting it from the entire HTML page? For example, I don't want it to collect all the API calls or JavaScript errors from the whole page, just my small bit.

Thanks
As an addition to what @livehybrid already said, see the .conf presentation https://conf.splunk.com/files/2017/slides/splunk-data-life-cycle-determining-when-and-where-to-roll-data.pdf

frozenTimePeriodInSecs only affects cold buckets, so a bucket first has to reach this stage in its life cycle, and hot buckets are rolled on a completely different basis than the time-based retention limit. That's it. That's also why the usual questions like "how to make sure we have 2 days of hot buckets, a week of warm buckets and two months of cold buckets" get the response of "you can't do it this way".
https://splunkbase.splunk.com/app/3124 This app (Maps+ for Splunk) has an option to draw paths.
You are wrong here. You're mixing up the rep/search factor with index searchability. https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Howtomonitoracluster#Indexes_tab

"Fully searchable. Is the index fully searchable? In other words, does it have at least one searchable copy of each bucket? If even one bucket in the index does not have a searchable copy, this field will report the index as non-searchable."

About the original issue, the way to go would probably be to check the bucket status - there should be a button on that tab to see bucket status in detail. As a side note - an indexer cluster with just two nodes isn't really fault-tolerant. It's kinda like RAID-1 without a hot spare - when you lose one node, you're in a degraded state and you don't have anywhere to replicate to.
With stats you can still hit the user's quota, since it's not streaming but creates temp files which it later merges. But seriously - yes, since you're limited by memory constraints and mvexpand works in batches, there is a risk, but that's why I advise stripping as much data as possible before mvexpanding.
Hi @secure  I noticed another reply on your other, similar question pointed towards using "MVDiff Add-on For Splunk", which might help avoid some complex SPL searches. Shamelessly pinching @VatsalJagani's image from the last reply. Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
I also got the same issue during a Splunk upgrade; a restart of the Splunk service fixed the KV store.
Let me try that again - my regex was missing a "!". I am also trying to remove the "::" as I don't think it is necessary.

[addprojectid]
REGEX = ^(?!.*Project_ID)
FORMAT = Project_ID::123456
MV_ADD = true
SOURCE_KEY = _meta

[addGUIDe]
REGEX = ^(?!.*GUIDe)
FORMAT = GUIDe::654321
MV_ADD = true
SOURCE_KEY = _meta

[addIntermediateForwarder]
REGEX = .*
FORMAT = IntermediateForwarder::XXXXXX
MV_ADD = false
SOURCE_KEY = _meta
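As a side note on the regex itself: ^(?!.*Project_ID) is a negative lookahead anchored at the start of _meta, so it matches only when the key does not appear anywhere in the string - which is what makes the transform add the default only for events that don't already carry the field. A quick Python sketch of that behaviour (the sample _meta strings are made up for illustration):

```python
import re

# Matches only when "Project_ID" does not appear anywhere in the string.
pattern = re.compile(r"^(?!.*Project_ID)")

meta_without = 'GUIDe::"abc-123" host::uf01'
meta_with = 'GUIDe::"abc-123" Project_ID::999999'

print(bool(pattern.search(meta_without)))  # True  -> default would be added
print(bool(pattern.search(meta_with)))     # False -> field already present
```

Note the match itself is zero-width: the lookahead consumes nothing, it only gates whether the FORMAT gets applied.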
| eval row=mvrange(0,max(mvcount(hostname1), mvcount(hostname2)))
| mvexpand row
| eval hostname1=mvindex(hostname1,row)
| eval hostname2=mvindex(hostname2,row)
| fields - row
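The row-index trick above (mvrange over the length of the longer field, then mvexpand and mvindex) behaves like a positional zip that pads the shorter list with nulls. A small Python sketch of the equivalent, using the hostnames from this thread:

```python
from itertools import zip_longest

hostname1 = ["syzhost.domain1", "abchost.domain1", "egfhost.domain1"]
hostname2 = ["syzhost.domain1", "abchost.domain1"]

# mvrange(0, max(mvcount(...))) generates one index per row; mvindex past
# the end of the shorter field yields null. zip_longest's fillvalue=None
# plays the same role here.
rows = list(zip_longest(hostname1, hostname2, fillvalue=None))
for h1, h2 in rows:
    print(h1, h2)
```

This produces one row per position, with None standing in for the missing third entry of hostname2.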
Your original post used hostnames2 which I used in my suggestion. In your second post, you used hostname2 which is not the same field. Please retry with the correct field names.
Hi everyone, I have a dataset:

| makeresults
| eval APP1="appdelta", hostname1= mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), hostname2=mvappend("syzhost.domain1","abchost.domain1")
| fields - _time

I want the final output to be like below:

APP1      hostname1        hostnames2
appdelta  syzhost.domain1  syzhost.domain1
appdelta  abchost.domain1  abchost.domain1
appdelta  egfhost.domain1

Any suggestions?
You are correct, it was just a typo in my post. I added the SOURCE_KEY=_meta to my transforms and it still is not working as expected. The logs from the UFs still get forwarded by the IF, but the IF itself is no longer tagging its own logs and therefore they are getting rejected by the indexers. This is my transforms.conf now:

[addprojectid]
REGEX = ^(?.*Project_ID::)
FORMAT = Project_ID::123456
MV_ADD = true
SOURCE_KEY = _meta

[addGUIDe]
REGEX = ^(?.*GUIDe::)
FORMAT = GUIDe::654321
MV_ADD = true
SOURCE_KEY = _meta

[addIntermediateForwarder]
REGEX = .*
FORMAT = IntermediateForwarder::XXXXXX
MV_ADD = false
SOURCE_KEY = _meta
@livehybrid I tried your solution and it's not working. I was able to resolve this using:

| makeresults
| eval APP1="appdelta", hostname1= mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), hostname2=mvappend("syzhost.domain1","abchost.domain1")
| fields - _time
| eval match=max(mvmap(hostname1, if(isnotnull(mvfind(hostname2, hostname1)), 1, hostname1)))
| table APP1,hostname1,hostname2,match

But now I have an additional issue: for some hostnames hostname2 is "no hosts", and in that case also it's just giving me 1 as the match:

| makeresults
| eval APP1="appdelta", hostname1= mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), hostname2=("")
| fields - _time
| eval match=max(mvmap(hostname1, if(isnotnull(mvfind(hostname2, hostname1)), 1, hostname1)))
| table APP1,hostname1,hostname2,match

which is not right.
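The mvmap/mvfind comparison above is essentially a per-element membership test over the second field. A small Python sketch of the intended logic, including the empty-second-list edge case from this post (note this uses exact membership, whereas SPL's mvfind is regex-based - an intentional simplification):

```python
hostname1 = ["syzhost.domain1", "abchost.domain1", "egfhost.domain1"]
hostname2 = ["syzhost.domain1", "abchost.domain1"]

def match_hosts(h1, h2):
    # mvmap(hostname1, if(isnotnull(mvfind(hostname2, hostname1)), 1, hostname1)):
    # emit 1 when the host is present in h2, otherwise emit the host itself.
    return [1 if h in h2 else h for h in h1]

print(match_hosts(hostname1, hostname2))  # [1, 1, 'egfhost.domain1']
# With an empty hostname2, nothing matches, so every host falls through:
print(match_hosts(hostname1, []))
```

When hostname2 is empty, the intended result is the full hostname1 list, which is what this sketch returns; the 1-only result seen in SPL comes from how mvfind/max behave on the empty field, not from the matching idea itself.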
Hi @secure  Might not be perfect, but does this work?

| makeresults
| eval APP1="appdelta", list1= mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), list2=mvappend("syzhost.domain1","abchost.domain1")
| fields - _time
| stats values(list2) as list2 by list1
| foreach list2 mode=multivalue [| eval notInList=IF(<<ITEM>>==list1,<<ITEM>>,null())]
| stats values(notInList)

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
Hi, brilliant and thanks :). It's working very well.
This worked for me:

| makeresults
| eval APP1="appdelta", hostname1= mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), hostname2=mvappend("syzhost.domain1","abchost.domain1")
| fields - _time
| eval match=max(mvmap(hostname1, if(isnotnull(mvfind(hostname2, hostname1)), 1, hostname1)))
| table APP1,hostname1,hostname2,match