So there is no way I could do it in Dashboard Studio... but I can do it in a Classic dashboard because I can edit the JavaScript?  Thank you so much for the idea!
There is no out-of-the-box component that gives you this functionality. With classic dashboards you can add your own JavaScript code that will take care of storing the "notes" in the KV Store so that they can be shared and updated, but it is something you'd need to write yourself. And there are several possible issues with that (including, of course, permissions management, concurrent access and so on). So it's not that straightforward.
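Whatever the JavaScript side ends up looking like, the KV Store collection behind it has to be defined in configuration. As a rough, hedged sketch (the collection name notes_collection, the lookup name notes_lookup and the field names are assumptions for illustration, not anything from this thread), the backing configuration could look like this:

# etc/apps/<your_app>/local/collections.conf
[notes_collection]
field.note = string
field.dashboard = string
field.updated_by = string
field.updated_at = time

# etc/apps/<your_app>/local/transforms.conf
[notes_lookup]
external_type = kvstore
collection = notes_collection
fields_list = _key, note, dashboard, updated_by, updated_at

Once the lookup is defined, the notes are also reachable from plain SPL (for example | inputlookup notes_lookup), which can be handy for showing them in a dashboard table even before any custom JavaScript is involved.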
So, they want to add notes and have them live for everyone to view. On the dashboard I will have tables and a notes section, and they want to add a note to that table/section from the dashboard view. Is that possible? I can maybe make a lookup or something, but I think I did not communicate what I wanted.
This worked beautifully, thank you!
Understood! I appreciate your answers.  I will keep this post unresolved for now and test it.
I'll give it a shot.  Thank you for your help!
Hi @alferone , I don't think so, if you have a fixed frequency of data updates. I prefer a summary index for the reasons I listed in my previous answer. Ciao. Giuseppe
No. As far as I know it's neither officially supported nor (well) documented. And at least until not so long ago you couldn't install both components from an RPM or DEB package because they were installed in the same place (/opt/splunk). More recent versions install in separate directories (/opt/splunk vs. /opt/splunkforwarder), so it might be possible to install both from packages (I haven't tried this myself, though, so I'd strongly advise testing in a lab first).
Hi @jpillai , ok, my hint is to evaluate the effort to change the searches that use these indexes. Anyway, let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
I was considering that initially.  Would the search be a little taxing to pull all of the different tools and times together?
Hi @alferone , why don't you use a summary index? In this way you're sure to have the latest version, you also keep the previous versions, and you don't have any limitation on the number of entries. Ciao. Giuseppe
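As a rough sketch of that approach (the index name asset_tracking_summary and the fields asset and tool are assumptions for illustration, not names from this thread), a scheduled search could periodically write the latest sighting per asset and tool into a summary index, and a dashboard search could read the most recent values back out.

Population search (scheduled, e.g. hourly):

index=toolA_index OR index=toolB_index
| stats max(_time) as last_seen by asset, tool
| collect index=asset_tracking_summary

Dashboard search reading the latest values:

index=asset_tracking_summary
| stats max(last_seen) as last_seen by asset, tool
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")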
The reason I need to "copy" the logs instead of splitting is that there are some searches that use index1 for searching the same data, so we still need to support that dependency. As the log volume that gets copied is relatively small, the additional license usage is not a major issue. It looks like a scheduled search with collect is what fits our requirement.
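For reference, a minimal sketch of such a scheduled search, assuming "ERROR" is the string that identifies the events to copy and that index2 already exists:

index=index1 "ERROR"
| collect index=index2

By default, collect writes the copied events with the stash sourcetype, which is not counted against the license; explicitly setting the sourcetype back to the original value via collect's sourcetype option would make the copied volume count again.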
Hi, yes, I know the setup might look a bit overengineered, but it best fits our needs as we need to “logically” separate the ES data from other Splunk use cases. Anyway, I wasn't aware that I could run a Universal Forwarder together with another Splunk Enterprise component. Is this supported, or is it at least officially documented somewhere?
To be fully honest, this setup seems a bit overcomplicated. I've seen setups with a single indexer cluster and multiple SHCs performing different tasks connecting to it, but multiple separate environments with events still sent between them... that's a bit weird. But hey, it's your environment. Actually, since you want to do some strange stuff with OS-level logs, it might be that one unique use case where it makes sense to install a UF alongside a normal Splunk Enterprise installation. That might be the easiest and least confusing solution.
Hello all, I have a requirement to list all of our assets and show the last time they appeared in the logs of many different tools. I wanted to use the KV Store for this. We would run a search against each tool's logs and then update its "last seen" time in the KV Store for the particular asset. I've attempted this a few ways, but I can't seem to get it going. I have the KV Store built with one column of last_seen times for one tool, but I am lost on how to update last_seen times for other tools for existing entries in the KV Store. Any guidance would be appreciated. Thank you!
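One common way to handle this (shown here only as a hedged sketch; the lookup name asset_tracker and the fields asset, last_seen_toolA and last_seen_toolB are illustrative assumptions) is to merge each tool's fresh results with the existing KV Store contents and write the whole set back, so one tool's column is refreshed without wiping the others:

index=toolA_logs
| stats max(_time) as last_seen_toolA by asset
| inputlookup append=true asset_tracker
| stats max(last_seen_toolA) as last_seen_toolA, max(last_seen_toolB) as last_seen_toolB by asset
| outputlookup asset_tracker

Each tool would get its own scheduled search with its own last_seen_<tool> column; inputlookup append=true pulls in the rows already stored, and stats max(...) by asset keeps the other tools' values while taking the newer timestamp for the tool being updated.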
Different retention periods are the textbook case for splitting data between different indexes. So instead of copying, you might consider simply sending some of your data to one index and the rest (the error logs) to another index. Otherwise - if you copy the events from one index to another - those events will be counted twice against your license, which doesn't make much sense.
Hi @jpillai , do you want to divide logs between the two indexes or copy a part of the logs (the ERRORs) into the second one? In the second case you pay the license twice.

Anyway, if you want to divide logs, you have to find the regex to identify the logs for index2 and put these props.conf and transforms.conf on your indexers or (if present) on Heavy Forwarders:

# etc/system/local/transforms.conf
[overrideindex]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = my_new_index

# etc/system/local/props.conf
[mysourcetype]
TRANSFORMS-index = overrideindex

The REGEX to use is the one you identified.

If instead you want to copy the ERROR logs into the second index, you can use the collect command, but you either pay the license twice (using the original sourcetype) or you must use the stash sourcetype (in which case you don't pay the license twice). In other words, you have to schedule a search like the following (if "ERROR" is the string that identifies the logs to copy):

index=index1 "ERROR" | collect index=index2

My hint is to override the index value for the logs that you want to index in index2. Ciao. Giuseppe
OK. What does "doesn't work" mean here? And do you get any results from the initial tstats search? Stupid question - is your datamodel even accelerated?
If you uncheck the option, what's the number of BT you see?