All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I went for list and mvdedup to preserve the order; if the order is not significant then, yes, values is just as good.
Great solution - I did not know about mvindex and mvrange! They seem like a useful couple. Instead of list -> mvdedup you can just use values.
Why not just ingest the log data into Dynatrace?
Do you just need a count of (distinct) users? | stats dc(user) as users
What do you mean by "stream processed"? This config stanza should produce XML-formatted events, not JSON. So something is actively fiddling with your data before it is ingested. You should check the config of that solution.
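For reference, XML output from a Windows event log input is typically controlled by the renderXml setting in inputs.conf. A minimal, hypothetical sketch (the actual stanza is not shown in this thread, so the channel name here is an assumption):

```
# inputs.conf -- hypothetical Windows event log input
[WinEventLog://Security]
# renderXml = true makes Splunk emit the raw XML form of each event;
# if events arrive downstream as JSON instead, something between the
# forwarder and the indexer is transforming them.
renderXml = true
```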
Switch one of them off
OK. In other words, you _do_ want site_replication_factor = origin:1,site1:1,site2:1,total:2 and the same for site_search_factor. This will give you one copy at each site for a total of two copies. The distribution of buckets within a single site will be managed by the CM.

Just remember that within the "source" site the data will _not_ be moved from the indexer that you send it to from your forwarders. Also, if you happen to lose one indexer within a single site, the cluster will try to replicate the buckets from the other site, which might end badly (you will run out of space, since you'll be merging data from two indexers into a single one).
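The policy described above maps to a server.conf sketch like the following on the cluster manager (a minimal example assuming two sites named site1 and site2; adjust the names and settings to your deployment):

```
# server.conf on the cluster manager -- two-site cluster, one searchable copy per site
[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,site1:1,site2:1,total:2
site_search_factor = origin:1,site1:1,site2:1,total:2
```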
You have a couple of complex and confusing searches - using appendcols does not guarantee that the data in each row relate to each other in a meaningful way. It is difficult to see how your expected result could be derived from your actual result. Perhaps if you shared some anonymised sample events, it might be clearer what you are dealing with and what you are trying to achieve.
We are facing a very strange issue: the objects of specific apps reverted back to old settings, and even the lookup files were impacted on our SHC. This issue has repeated more than once since upgrading Splunk to v9.0.4. Can you please help with this case?
| table id x y z k
| stats list(x) as x list(y) as y first(z) AS z first(k) as k BY id
| eval x=mvdedup(x)
| eval y=mvdedup(y)
| eval xrange=mvrange(0,mvcount(x))
| mvexpand xrange
| eval name="x".(xrange+1)
| eval {name}=mvindex(x,xrange)
| eval yrange=mvrange(0,mvcount(y))
| mvexpand yrange
| eval name="y".(yrange+1)
| eval {name}=mvindex(y,yrange)
| fields - x xrange y yrange name
| stats values(*) as * by id
Thanks for that. As I stated, rf/sf is 1 per site, so a total of 2 searchable copies due to costs. I want to know the best way to spread the buckets of an index over as many indexers as possible to get the best bandwidth out of the I/O subsystem. Cheers
So rather than having 2 volumes, just have 1 and use tiered storage, so that you only need to monitor the usage at the storage system. That makes capacity planning much easier, and hardware tiering is far more efficient/performant, as frequently accessed data will be elevated to a higher performance tier.
Copy the raw event and paste it into a code block (</>).
Which fields are missing?
Please share some anonymised sample events in a code block for both source types demonstrating the common fields you want to use to correlate the events by.
Solution: Please check whether analytics is enabled in the Machine Agent config:

<APPDYNAMICS_HOME>/<Machine_Agent>/monitors/analytics-agent/monitor.xml

<enabled>true</enabled>

Thanks.
Hi,

The code is like:

index=main host=server10 (EventCode=4624 OR EventCode=4634) Logon_Type=3 NOT user="*$" NOT user="ANONYMOUS LOGON"
| dedup user
| where MsgID!="AUT22673"
| eval LoginTime=_time
| table user LoginTime

The output will list the active RDP users. I have no idea how to do the rest of it, either:
1. If the number of users == 0, print "No Remote desktop user", or
2. Put the number of users into a Single Value / Radial Gauge (not the usernames).

Sounds so easy, but I cannot figure out how to do it. Too little Splunk experience.

Rgds
Geir
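One common pattern for the "zero users" case (a sketch building on the search above, not a tested answer) is to reduce to a single count first and then derive the display string with eval, so the same search can feed a Single Value panel:

```
index=main host=server10 (EventCode=4624 OR EventCode=4634) Logon_Type=3 NOT user="*$" NOT user="ANONYMOUS LOGON"
| where MsgID!="AUT22673"
| stats dc(user) AS users
| eval display=if(users==0, "No Remote desktop user", users)
```

For a Radial Gauge, point the visualization at the numeric users field rather than the display string.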
| makeresults
| eval _raw="id;x;y;z;k
a;1;;;
a;;1;;
a;;;1;
a;2;;;
a;;2;;
a;;;;1
b;1;;;
b;;1;;
b;;;1;
b;2;;;
b;;2;;
b;;;;1
a;;1;;
a;;;1;
a;2;;;
a;;2;;
a;;;;1
b;1;;;
b;;1;;
b;;;1;
b;2;;;
b;;2;;
b;;;;1"
| multikv forceheader=1
| fields id x y z k
| table id x y z k
| stats first(*) AS * BY id

I have data of the following form, where the usual variables Z and K (might) have multiple measurements that are not unique, so I use "| stats first()" to aggregate them to the ID. However, there are variables X and Y that do contain multiple unique values (which might also repeat). In the end I would like to obtain a table like so:

id  x1  x2  y1  y2  z  k
a   1   2   1   2   1  1
b   1   2   1   2   1  1

where (ideally) the fields of X and Y are numbered for each unique value (NOT by the value in the field), so that if 3 unique values are in the data it would yield X1, X2, X3.
@bowesmana Can you please guide me on what changes I should make? Do I need to change the cron schedule expression, or do I need to make changes in my queries? Please guide.
I am appending results from the queries below, which will display different objectTypes.

For suppliedMaterial:

index="" source="" "suppliedMaterial" AND "reprocess event"
| stats count
| rename count as ReProcessAPICall
| appendcols [search index="" source="*" "suppliedMaterial" AND "data not found for Ids"
    | eval PST=_time-28800
    | eval PST_TIME3=strftime(PST, "%Y-%d-%m %H:%M:%S")
    | spath output=dataNotFoundIds path=dataNotFoundIds{}
    | stats values(*) as * by _raw
    | table dataNotFoundIds{}, dataNotFoundIdsCount, PST_TIME3
    | sort - PST_TIME3 ]
| appendcols [search index="" source="*" "suppliedMaterial" AND "sqs sent count"
    | eval PST=_time-28800
    | eval PST_TIME4=strftime(PST, "%Y-%d-%m %H:%M:%S")
    | spath output=sqsSentCount path=sqsSentCount
    | stats values(*) as * by _raw
    | table sqsSentCount PST_TIME4
    | sort - PST_TIME4 ]
| appendcols [search index="" source="" "suppliedMaterial" AND "request body"
    | eval PST=_time-28800
    | eval PST_TIME4=strftime(PST, "%Y-%d-%m %H:%M:%S")
    | spath output=version path=eventBody.version
    | spath output=objectType path=eventBody.objectType
    | stats values(*) as * by _raw
    | table version, objectType ]
| table objectType version dataNotFoundIdsCount sqsSentCount ReProcessAPICall

For Material:

index="" source="" "material" AND "reprocess event"
| stats count
| rename count as ReProcessAPICall
| appendcols [search index="" source="*" "material" AND "data not found for Ids"
    | eval PST=_time-28800
    | eval PST_TIME3=strftime(PST, "%Y-%d-%m %H:%M:%S")
    | spath output=dataNotFoundIds path=dataNotFoundIds{}
    | stats values(*) as * by _raw
    | table dataNotFoundIds{}, dataNotFoundIdsCount, PST_TIME3
    | sort - PST_TIME3 ]
| appendcols [search index="" source="*" "material" AND "sqs sent count"
    | eval PST=_time-28800
    | eval PST_TIME4=strftime(PST, "%Y-%d-%m %H:%M:%S")
    | spath output=sqsSentCount path=sqsSentCount
    | stats values(*) as * by _raw
    | table sqsSentCount PST_TIME4
    | sort - PST_TIME4 ]
| appendcols [search index="" source="" "material" AND "request body"
    | eval PST=_time-28800
    | eval PST_TIME4=strftime(PST, "%Y-%d-%m %H:%M:%S")
| spath output=version path=eventBody.version
    | spath output=objectType path=eventBody.objectType
    | stats values(*) as * by _raw
    | table version, objectType ]
| table objectType version dataNotFoundIdsCount sqsSentCount ReProcessAPICall

My actual result is:

objectType        version  dataNotFoundIdsCount  sqsSentCount  ReProcessApiCall
suppliedMaterial  all      4                     15            12
suppliedMaterial  latest   2                     19
suppliedMaterial  all      3                     11
Material          latest   6                     10
Material          latest   5                     4
Material          all      4                     1
Material          all      2                     3

My expected result: basically I need to sum the two fields (dataNotFoundIdsCount and sqsSentCount) based on the version, whether 'all' or 'latest', from the previous queries. I am thinking of using the version as a dynamic value and adding a conditional check in those queries to accumulate the field values for each version, naming them dataNotFoundIdsCount_all and dataNotFoundIdsCount_latest. Finally, in the last query, check the version again and show the sum.

Please advise if there is an easier way of doing this.

objectType        version  dataNotFoundIdsCount  sqsSentCount  ReProcessApiCall
suppliedMaterial  all      4                     15            12
suppliedMaterial  latest   2                     19
Material          all      3                     11
Material          latest   6                     10
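If the goal is simply to collapse the duplicate rows per objectType/version pair, one hedged alternative to the dynamic field names described above (a sketch assuming the combined results already carry the fields shown in the tables) is a plain sum by both keys:

```
| stats sum(dataNotFoundIdsCount) AS dataNotFoundIdsCount
        sum(sqsSentCount) AS sqsSentCount
        max(ReProcessAPICall) AS ReProcessAPICall
        BY objectType version
```

This avoids per-version conditional evals entirely, at the cost of losing the per-row timestamps.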