All Posts

OK. You can't "forward" already existing data. You need to search the indexes, create a "dump" of the results, and push it to the other solution. For the continuously incoming data you can use either syslog output on your indexers or an S2S output to an external component (either a HF or a third-party solution which can speak S2S). The caveat here is that this introduces additional complexity and possible points of failure on your indexers. If your architecture had a separate HF layer in front of the indexers, through which all input streams went, you could do it on that HF layer instead of on the indexers. Forwarding to an external system is generally tricky to do well. You might want to engage PS or your local Splunk Partner.
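As a rough illustration of the syslog-output approach, an outputs.conf on the indexers (or on a HF layer) could look something like the sketch below. The group name and the logstash.example.com:514 destination are placeholders; point it at whatever receives data on the Elasticsearch side (e.g. Logstash with a syslog input).

    # outputs.conf - send a copy of incoming events out via syslog (illustrative)
    [syslog]
    defaultGroup = elastic_syslog

    [syslog:elastic_syslog]
    # placeholder host and port for the syslog receiver
    server = logstash.example.com:514
    type = tcp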
Nope. That was the issue that I had. If yours is caused by something else, you might need to raise a support ticket.
How much data do you have? Only one/some indexes, or all of them?
If I recall correctly, SHC shouldn't replicate those files in etc/system/local. Those are host-specific local files by default. Are you absolutely sure that your host is defined in an inputs.conf file under system/local and not inside some app? Can you check it from the CLI with the command "splunk btool inputs list --debug | egrep host"? Unfortunately this gives a lot of entries, but you can see whether 'etc/system/local' is also on the list.
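For reference, btool with --debug prefixes each setting with the file it came from, so the check boils down to looking at that first column. Hypothetical output lines (paths and hostname are illustrative only) would look roughly like:

    /opt/splunk/etc/system/local/inputs.conf          host = my-search-head
    /opt/splunk/etc/apps/some_app/local/inputs.conf   host = my-search-head

If the winning value comes from etc/system/local, it is a host-local setting rather than something pushed inside an app.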
Hi @PickleRick, do you have any clue for a fix, or any other possible workaround?
Thank you for your response. We will need to forward both old and new data, and the process of forwarding will be continuous.
Do you need to forward only new data, or also already indexed data? If only new data, then you can e.g. use your own Filebeat etc. to collect the same input, or configure your current UF to forward to two places. If the need includes already indexed data, then the amount of data and whether this is a one-shot or a continuous need will determine how to do it.
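For the "forward to two places" option, the UF's outputs.conf could define two tcpout groups along these lines. The hostnames and ports below are placeholders, and the second destination must be something that understands S2S (e.g. a HF that in turn feeds Elasticsearch).

    # outputs.conf on the UF - clone data to two destinations (illustrative)
    [tcpout]
    defaultGroup = splunk_indexers, elastic_route

    [tcpout:splunk_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997

    [tcpout:elastic_route]
    # placeholder; a HF or other S2S-capable receiver
    server = hf1.example.com:9997

Listing both groups in defaultGroup makes the forwarder send a copy of the data to each group.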
Hi all, I am currently facing an issue in my Splunk environment. We need to forward data from Splunk to a third-party system, specifically Elasticsearch. For context, my setup consists of two indexers, one search head, and one deployment server. Could anyone share the best practices for achieving this? I’d appreciate any guidance or recommendations to ensure a smooth and efficient setup. Thanks in advance!
Thank you @sainag_splunk. I do understand the "how" so to speak, but I do not understand the "why". If I validate a bundle and then push that bundle, I would assume that the checksum for the validated bundle after a push should be the same as the new and current one. Otherwise I cannot verify that the last validated bundle is exactly what has been pushed and applied. But maybe I'm missing the point? If the checksums are not there to validate changes but rather just to track the presence of unapplied changes, then it's fine. I suspect that at this point the logical follow-up question is: how are bundles modified after validation but before being applied? Or are the checksums only for the changed files/settings while the final checksum is for the entire bundle? All the best
Thanks for response. I will try it out.
Yes. I'm just pointing it out because it's a common use case - to find something that is (not) followed by another something - and it's a bit unintuitive that Splunk by default returns results in reverse chronological order. So you sometimes need to manipulate the order of results so that "previous", in terms of carrying values further down the stream, means what you actually want. Or you have to remember that you're returning the end of some interesting period, not its beginning. As I said - depending on the use case it can sometimes be confusing, and it's worth remembering to always double-check your result order when you're doing similar things.
Technically you can work with regexes defined in lookups by doing something like this:

    | eval enabled=1
    | lookup regex_list.csv enabled OUTPUT regex
    | eval match=mvmap(regex, if(match(path, regex), regex, null()))

where your csv contains 2 columns, the regex and a column called enabled with a value of 1. This will pull ALL regexes into each event and then, using mvmap, will match the path against each of the regexes individually - for each match it will add the matching regex to the match field. After the mvmap, you will have a potentially multivalue field 'match' with one or more matches. If match is null, then there were no matches, so | where isnotnull(match) will filter out non-matching paths. This is not using a lookup as a lookup, but simply using the lookup as a repository of matches which you "load" into each event during the pipeline. Depending on how many regexes you have, it may or may not be an option.
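To make that concrete, a minimal sketch could look like the following. The lookup name regex_list.csv comes from the post above, but the field name path, the index/sourcetype, and the example regexes are placeholders for illustration only.

    regex,enabled
    "(?i)powershell.*-enc",1
    "(?i)certutil.*urlcache",1

    index=your_index sourcetype=your_sourcetype
    | eval enabled=1
    | lookup regex_list.csv enabled OUTPUT regex
    | eval match=mvmap(regex, if(match(path, regex), regex, null()))
    | where isnotnull(match)

The first block is the csv content, the second is the search that keeps only events whose path matches at least one of the listed regexes.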
Yes, this one works because the connected AFTER the disconnected does not happen, resulting in count=1 for a disconnect - normally you'd get them in reverse, and in this case that would be the order needed. It rather trivialises the example, but without knowing the data, it's hard to know whether it would work in all cases.
Can anyone help me with this, please?
Thank you for the details, this will help me with my current dashboard.
If you mean Splunk Enterprise trial, you can configure TLS on any component. If you mean Splunk Cloud trial - no. The inputs are not encrypted and the webui uses self-signed certs as far as I remember.
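For the Splunk Enterprise case, a minimal sketch of enabling TLS on a receiving port could look like the inputs.conf fragment below. The port, certificate path, and password are placeholders; the certificate itself has to be prepared separately.

    # inputs.conf on the receiver - TLS-enabled S2S listening port (illustrative)
    [splunktcp-ssl:9997]
    disabled = 0

    [SSL]
    serverCert = /opt/splunk/etc/auth/mycerts/server.pem
    # placeholder - use your real certificate password
    sslPassword = changeme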
Thank you for your response. Since regex cannot be used in lookups, and we are now defining everything within correlation searches, which is cumbersome to update, are there any alternative solutions? Are there more efficient ways to detect suspicious command execution without relying solely on correlation searches? Your guidance on streamlining this process would be greatly appreciated.
The overall idea is OK, but if you want to check whether something happens _after_ an interesting event, you must reverse the original data stream, because you cannot run streamstats backwards. But the example data was in chronological order while the default result sorting is the opposite. So it's all a bit confusing.
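For illustration, one way to force chronological order before carrying values forward is something like the sketch below (the index and the field names status and host are placeholders):

    index=your_index
    | sort 0 _time
    | streamstats current=f window=1 last(status) as prev_status by host

Here | sort 0 _time puts events into ascending time order (the 0 removes the output limit); alternatively, | reverse simply flips the current result order.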
Hello @sainag_splunk, Thanks for sharing the information. Yes, I am currently using Splunk Cloud Trial version for my POC work. Thanks.
@Splunk_Fabi Hello, which version of ES are you using? I have seen a similar bug in 7.3.2 (a fix might be on the future roadmap). If you are on 7.3.2, please file a ticket with Splunk Support to expedite the issue. If this helps, please upvote.