All Posts


Your second search should work - is it just a simple lookup, i.e. not a wildcard or CIDR match? Do you have a typo in your actual search?
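For reference, wildcard and CIDR matching have to be declared on the lookup itself in transforms.conf. A minimal sketch - the stanza and column names here are assumptions, not your actual config:

[approvedsenders]
filename = approvedsenders.csv
# use CIDR(Value) instead for subnet matching
match_type = WILDCARD(Value)

Without a match_type, the lookup does plain exact matching.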
Hello, I have a lookup file and I would like to use it to search a dataset and return a table showing each entry in the lookup file with a count of its matches in the main dataset, including the lookup file entries which have a null (zero) count.

I've tried a number of techniques; the closest I have come is:

dataFeedTypeId=AS [| inputlookup approvedsenders | fields Value | return 1000 $Value]
| stats count as cnt_sender by sender

But this only shows the lookup file entries with non-zero values (n.b. I am manually adding 1000 to the return to make it work; that's a different problem). I have also tried:

dataFeedTypeId=AS [ | inputlookup approvedsenders | fields Value]
| stats count as cnt_sender by Value
| append [ inputlookup approvedsenders | fields Value]
| fillnull cnt_sender
| stats sum(cnt_sender) as count BY Value

This shows all the values in the lookup file but shows a zero count against each one. Thank you in advance.
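A common pattern for keeping the zero-count entries is to drive the stats from a field name shared by both sides - a sketch reusing the field names from the post, assuming the events carry the lookup value in sender:

dataFeedTypeId=AS [| inputlookup approvedsenders | fields Value | rename Value as sender]
| stats count as cnt_sender by sender
| append [| inputlookup approvedsenders | fields Value | rename Value as sender]
| fillnull value=0 cnt_sender
| stats sum(cnt_sender) as count by sender

In the second attempt above, the events are counted by Value, a field that only exists in the appended lookup rows, which is why every count comes out as zero.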
Subsearches are limited to 50,000 events. Could this be the issue? Try running the search over a short time period, e.g. 5 minutes. Assuming that is the issue, either reduce your time period to a level that avoids the problem, or rewrite the search to avoid subsearches, i.e. remove the join - see the sketch below.
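A join on a common key can usually be replaced with a single stats over both datasets. A generic sketch, with placeholder index and field names:

(index=index_a) OR (index=index_b)
| stats values(field_from_a) as field_from_a values(field_from_b) as field_from_b by common_key

stats is not subject to the subsearch limits, so the rewritten search works over any time range.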
Thanks
Hi all, I am using the search below to enrich a field, StatusDescription, via a subsearch. When I run the subsearch alone it returns results for hostname and StatusDescription, but with the join below the StatusDescription field comes back empty. Please correct me.

index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections os=Linux
| dedup hostname
| rex field=hostname "(?<hostname>[^.]+)\."
| eval age=(now()-_time)
| eval LastActiveTime=strftime(_time,"%y/%m/%d %H:%M:%S")
| eval Status=if(age<3600,"Running","DOWN")
| rename age AS Age
| eval Age=tostring(Age,"duration")
| table _time, hostname, sourceIp, Status, LastActiveTime, Age
| join type=left hostname
    [ search index=index1 sourcetype="new_source1"
    | rename NodeName AS hostname
    | table hostname, StatusDescription ]
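One thing worth checking: the rex in the outer search trims hostname down to the part before the first dot, so if NodeName in index1 is a fully-qualified name the join keys will never match and StatusDescription will come back empty. A sketch of the join with the same trimming applied inside the subsearch (assuming that is the mismatch):

| join type=left hostname
    [ search index=index1 sourcetype="new_source1"
    | rename NodeName AS hostname
    | rex field=hostname "(?<hostname>[^.]+)\."
    | table hostname, StatusDescription ]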
I went for list and mvdedup to preserve the order; if the order is not significant, then yes, values is just as good.
Great solution - I did not know about mvindex and mvrange! They seem like a useful couple. Instead of list -> mvdedup you can just use values.
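Side by side, using a hypothetical multivalue field x:

| stats list(x) as x by id | eval x=mvdedup(x)

keeps the first-seen order, while

| stats values(x) as x by id

also dedups but returns the values sorted lexicographically, so the original order is lost.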
Why not just ingest the log data into Dynatrace?
Do you just need a count of (distinct) users?

| stats dc(user) as users
What do you mean by "stream processed"? This config stanza should produce XML-formatted events, not JSON. So something is actively fiddling with your data before it's ingested. You should check the config of that solution.
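For reference, this is the kind of stanza that emits XML - a sketch assuming a Windows event log input in inputs.conf:

[WinEventLog://Security]
# emit the raw XML form of each event instead of the classic key=value rendering
renderXml = true

If events with this set still land as JSON, some intermediate layer (e.g. a stream processor between forwarder and indexer) is re-serialising them.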
Switch one of them off
OK. In other words you _do_ want site_replication_factor = origin:1,site1:1,site2:1,total:2 and the same for site_search_factor. This will give you one copy at each site, for a total of two copies. The distribution of buckets within a single site will be managed by the CM. Just remember that within the "source" site the data will _not_ be moved off the indexer that your forwarders send it to. Also, if you happen to lose one indexer within a single site, the cluster will try to replicate the buckets from the other site, which might end badly (you will run out of space, since you'll be merging data from two indexers onto a single one).
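On the cluster manager that corresponds to something like the following in server.conf - a minimal sketch, assuming two sites named site1 and site2 (on pre-9.x versions the mode value is master rather than manager):

[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,site1:1,site2:1,total:2
site_search_factor = origin:1,site1:1,site2:1,total:2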
You have a couple of complex and confusing searches - using appendcols does not guarantee that the data in each row relate to each other in a meaningful way. It is difficult to see how your expected result can be derived from your actual result. Perhaps if you shared some anonymised sample events, it might be clearer what you are dealing with and what you are trying to achieve.
We are facing a very strange issue: the objects of specific apps reverted back to old settings, and even the lookup files were impacted, on our SHC. This has happened more than once since upgrading Splunk to v9.0.4. Can you please help with this case?
| table id x y z k
| stats list(x) as x list(y) as y first(z) AS z first(k) as k BY id
| eval x=mvdedup(x)
| eval y=mvdedup(y)
| eval xrange=mvrange(0,mvcount(x))
| mvexpand xrange
| eval name="x".(xrange+1)
| eval {name}=mvindex(x,xrange)
| eval yrange=mvrange(0,mvcount(y))
| mvexpand yrange
| eval name="y".(yrange+1)
| eval {name}=mvindex(y,yrange)
| fields - x xrange y yrange name
| stats values(*) as * by id
Thanks for that; as I stated, rf/sf is 1 per site, so a total of 2 searchable copies due to costs. I want to know the best way to spread the buckets of an index over as many indexers as possible, to get the best bandwidth out of the I/O sub-system. Cheers
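The usual lever for spreading incoming data evenly is forwarder-side load balancing - a sketch of outputs.conf on the forwarders, with placeholder indexer names:

[tcpout:site_indexers]
server = idx1:9997,idx2:9997,idx3:9997
# rotate to a different indexer more often than the 30s default
autoLBFrequency = 10

Pairing this with EVENT_BREAKER_ENABLE = true in props.conf on the forwarders (so streams can be switched mid-file) keeps any single indexer from accumulating a disproportionate share of an index's buckets.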
So rather than having 2 volumes, just have 1 and use tiered storage, so that you only need to monitor the usage at the storage system. This makes capacity planning much easier, and hardware tiering is far more efficient/performant, as frequently accessed data will be elevated to a higher performance tier.
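In indexes.conf terms that means pointing everything at a single volume and letting the array tier underneath it - a sketch with placeholder paths and sizes:

[volume:primary]
path = /splunk/data
# cap the volume so Splunk rolls/ages buckets before the mount fills
maxVolumeDataSizeMB = 4000000

[myindex]
homePath = volume:primary/myindex/db
coldPath = volume:primary/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb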
Copy the raw event and paste it into a code block </>
Which fields are missing?
Please share some anonymised sample events in a code block for both source types demonstrating the common fields you want to use to correlate the events by.