
All Posts

Has anyone tried this against a KV Store collection? It seems to break in that case, when you have multivalue (mv) fields.
After some research I could verify that I need to make an indexed lookup, so the fields will be indexed together with the data.
If you don't observe performance degradation, you needn't worry about it.
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf#Timestamp_extraction_configuration You have the TZ parameter which you can use to "bind" a predefined timezone to a particular sourcetype, source or host. But it's always best if the source specifies the timezone within the timestamp itself - it saves you some work and possibly much grief later.
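For illustration, a minimal props.conf sketch (the sourcetype name below is hypothetical, and the stanza belongs on the instance that parses the data - indexers or heavy forwarders):

# props.conf - bind a predefined timezone to a sourcetype
[my_custom_sourcetype]
TZ = America/New_York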
But what do you mean by asset? What in your data tells you that this is one "asset" and this is another one? Is it the host field or some other field within your data? Or any combination of fields?
Sorry, I should have mentioned what was obvious to me: the rest command should be run on the MC - a properly configured MC should have access to all your components. But the rest call should have been made from the MC _against_ your CM. Still, if you're stuck in that "no candidates" state, I'd suggest opening a support case.
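As a rough sketch of what that could look like when run from the MC (the CM hostname below is a placeholder, and the endpoint is assumed to be cluster/master/fixup, which lists outstanding fixup tasks; the splunk_server argument restricts the call to the CM instead of fanning out to every search peer):

| rest splunk_server=cm01.example.com /services/cluster/master/fixup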
Access to an index is all or none.  Splunk does not have a means for selective access to data within an index.  In fact, one of the criteria for creating a new index is different security needs.  IOW, each customer's data should be in its own index(es). You can try defining a search filter (customer=foo, perhaps) for the end user, but that will apply to all indexes and so may not be a workable solution.
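If you do try the filter route, a minimal authorize.conf sketch might look like the following (role name, index, and field are hypothetical; srchFilter is appended to every search the role runs, across all indexes it can access):

# authorize.conf - hypothetical per-customer role
[role_customer_foo]
srchIndexesAllowed = shared_index
srchFilter = customer=foo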
@VatsalJagani  Many thanks for the response. The SPL/query seems to be picking up the data slightly differently. What I'm getting is that the Today column volume is Friday's data, and the Yesterday count is also coming from Friday's data. Can you help please?
What do you have so far?  Do you have the two searches you want to combine?
@selvam_sekar - You can filter the data beforehand and then do what you would do otherwise. Something like this with the streamstats command:

basesearch earliest=-4d@d latest=now
| bin span=1d _time
| stats count by Name, _time, date_wday
| search NOT (date_wday="saturday" OR date_wday="sunday")
| streamstats current=f window=1 last(count) as Yesterday by Name
| rename count as Today
| stats last(*) as * by Name
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Today Yesterday percentage_variance

I hope this helps!!! Kindly upvote if it does!!!
Hi @VatsalJagani  The scenario here is that a similar panel and query are used in both dashboards, with the only difference being the location. I hope you understand my query.
Modifications have been made on the CM in etc/system/local/server.conf. Added in etc/system/local/server.conf (plus a restart of the CM): "constrain_singlesite_buckets = false". No change. Performed another rolling restart. No change. Still have pending jobs that are impossible to resynchronize. Any suggestions on how to find where the problem comes from? Many thanks
Thanks for the reply. The rest command is not working for me... I get error messages in the CM search on all my indexer peers like: [Peer1,Peer2,....] HTTP status not OK, Code=503, Service Unavailable. The web part is stopped on all my indexers... Is there any other way to get this info? Thanks
@parthiban - You can put all panels into a single dashboard and hide/show panels based on which dropdown option is selected. Here is a reference - https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Hiding_rows_or_panels_in_dashboards_with_XML   I hope this helps!!!
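As a rough Simple XML illustration (token names, panel titles, dropdown values, and panel contents are hypothetical), the usual pattern is a dropdown whose change handler sets one token per option, and panels that declare depends on the matching token:

<fieldset>
  <input type="dropdown" token="app">
    <label>Application</label>
    <choice value="genesys">Genesys</choice>
    <choice value="incontact">InContact</choice>
    <change>
      <condition value="genesys">
        <set token="show_genesys">true</set>
        <unset token="show_incontact"></unset>
      </condition>
      <condition value="incontact">
        <set token="show_incontact">true</set>
        <unset token="show_genesys"></unset>
      </condition>
    </change>
  </input>
</fieldset>
<row>
  <panel depends="$show_genesys$">
    <title>Genesys</title>
    <html><p>Genesys panels go here</p></html>
  </panel>
  <panel depends="$show_incontact$">
    <title>InContact</title>
    <html><p>InContact panels go here</p></html>
  </panel>
</row>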
Hi @VatsalJagani  Thank you for your response. I have a set of dashboards for inContact and Genesys, and I want to combine them into a single dashboard. InContact has a separate set of locations, and Genesys has its own distinct locations. I plan to group the locations and store them in a single variable, such as "inContact" and "Genesys," and then add filters for inContact and Genesys.  I have shared a Genesys dashboard for your reference. Prior to the country filter, I intend to incorporate an application filter (Genesys or InContact).  
HA/FO shouldn't affect the Splunk API URL. The base URL should be constructed as https://<host or IP>:8089
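For example, a quick way to confirm reachability (host and credentials below are placeholders; 8089 is the default management port and may differ in your deployment):

curl -k -u admin:changeme https://splunk.example.com:8089/services/server/info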
I need to look for an incoming email and, if an email matches a certain subject, check another sourcetype to see whether there was a hit on that sourcetype within an hour of that email coming through.
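One possible sketch, purely as an assumption about how the data might be laid out (index, sourcetype, and subject pattern below are hypothetical), is to let each matching email drive a one-hour time-bounded subsearch with the map command:

index=mail sourcetype=email subject="*certain subject*"
| eval start=_time, end=_time+3600
| map maxsearches=100 search="search index=main sourcetype=other_sourcetype earliest=$start$ latest=$end$"

Note that map runs one subsearch per matching email, so it only scales to a modest number of matches; a stats-based correlation on a shared field would be preferable at higher volumes.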
Hi All, I am almost a starter in Splunk but my org uses this tool as a log management utility. I need help getting a direction on how to filter data from logs in a distributed async logging product. Problem statement: There are multiple log files on multiple Linux boxes being generated every second. I need to:
1. Search for ids created, their creation timestamps, and the batches under which these ids exist.
2. Filter the ids based on passed batches (this is another line in the same log file).
3. Calculate the E2E duration for the id processing by searching for the processed id from step 1 and subtracting the timestamp of step 1 from the timestamp of step 3 (this is again printed in the log files).
I have been doing this using Oracle external tables and Linux shell scripts but need to do it in a better way using Splunk and need help. Opinions are highly appreciated.
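If the id and batch values can be extracted from the raw events, a hedged sketch along these lines (index, sourcetype, search terms, and rex patterns are assumptions about the log format) computes the end-to-end duration per id with stats:

index=app_logs ("id created" OR "id processed" OR "batch passed")
| rex "id=(?<id>\w+)"
| rex "batch=(?<batch>\w+)"
| stats earliest(_time) as created latest(_time) as processed values(batch) as batch by id
| eval e2e_seconds = processed - created

Restricting the result to ids whose batch has passed (step 2) could then be handled with a subsearch over the "batch passed" events or an additional extracted status field and a where clause.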
Hi, I have the below SPL, which returns today's count vs yesterday's count and the difference between them. If I run this search on Monday, I want "Yesterday" to show last Friday's data instead of the weekend. Could you please help? SPL:

base search earliest=@d latest=now
| append [ base search earliest=-1d@d latest=-1d ]
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart count by Name, Day
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Today Yesterday percentage_variance
Name does return a value, as does every other attribute you listed. How is name not valid? Isn't it just pulling from properties in AD?