@marnall I think the cleanest way, until they fix it, would be to build a Custom Function that uses REST to check for the <thing> you want and then output a boolean to use downstream. At least the CF could be made reusable for similar use cases.
Hello everyone, I'm having a hard time finding the appropriate way to display data. I have duplicate rows where only one field is unique, and I would like to dedup while keeping one instance of the unique value.

Example of what I want to dedup:

field1 field2 field3 field4
a      b      c      d
a      b      c      e
a      b      c      f

Example of what I would like to see:

field1 field2 field3 field4
a      b      c      d

Any help would be greatly appreciated. Regards.
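A minimal SPL sketch of one way to get there, assuming the fields really are named field1 through field4 and that keeping the first field4 value encountered is acceptable:

<your_search>
| dedup field1 field2 field3

dedup keeps the first event for each distinct field1/field2/field3 combination, which yields exactly one row per duplicate group.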
@PickleRick I changed the URL to use the raw endpoint. This seems to have fixed the timestamp, but Splunk is now breaking the events at the timestamp fields. I have added KV_MODE = json for this sourcetype on both the HF and the SH, but that did not fix the line breaking.
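For what it's worth, KV_MODE is a search-time setting, so it cannot influence line breaking, which happens at index time on the first parsing component in the path (here the HF). A hedged props.conf sketch for single-line JSON events; the sourcetype name, the "timestamp" field name, and the time format are assumptions to adjust to your data:

# props.conf on the HF (first parsing tier)
[my_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json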
Hi @cadrija,
I’m a Community Moderator in the Splunk Community.
This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.
Thank you!
Hi @Rakzskull,
I’m a Community Moderator in the Splunk Community.
This question was posted 6 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.
Thank you!
Thanks. As I suspected, this strikes me as fraught with challenges and impossible to fully replicate in a lab, which increases the risk of an outage or lost data during the dual-forward period. I think we will sadly need to keep the existing index names to keep the migration simple; it isn't wrong, just not ideal for a clean new environment.
The UF is sending the data as just cooked. The HF is sending it as cooked and parsed. The issue is not with INGEST_EVAL. The thing is that on the indexer tier, because the data has already been processed by the HF, no props are fired. The events are immediately routed to the indexing pipeline, completely skipping the preceding steps. You could try to fiddle with the routing on the splunktcp input so that parsed data gets routed to the typingQueue, but that can break other things.
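For reference, a hedged sketch of that splunktcp routing tweak; the default route shown here is from my recollection of inputs.conf.spec, so verify it against your Splunk version before touching it, since forcing parsed data back through extra queues is exactly the kind of thing that can break other inputs:

# inputs.conf on the indexer tier (the port is illustrative)
[splunktcp://9997]
# The stock default sends already-parsed data (has _linebreaker) straight to
# indexQueue. Pointing it at typingQueue instead forces HF-parsed events back
# through the typing pipeline so index-time TRANSFORMS can fire again.
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:typingQueue;absent_key:_linebreaker:parsingQueue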
Hello, good afternoon. Does anyone know how to integrate the Adabas database with Splunk, and where I can download the JDBC drivers for Splunk DB Connect?
Hello, I am using Splunk Enterprise. On the MS site I registered an app with permissions for Security Alerts and Security Incidents, and I also added the account in Splunk. It is impossible to add an input; the error is below:
OK. Sourcetype rewriting is one thing, but I have a hunch that you're trying not only to get a particular metadata field rewritten but would also like to reprocess the whole event with the new sourcetype. And that (depending on how far "back" you would want to go) is either much more difficult or plain impossible. See here: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774

The event comes through all stages of the ingestion pipeline in sequence and there is no way for an event to "go back". Even if you rewrite the sourcetype, it will still get processed further "down the street" according to the props/transforms for the original sourcetype. The only difference will be that it will be written into the index with the new sourcetype, and when you search your data the search-time props for the new sourcetype will be applied.

There is one "exception": if you use CLONE_SOURCETYPE, the event will be "forked" and a new copy with the new sourcetype will be ingested into the pipeline again, but it will still be after line breaking and, as far as I remember, after timestamp parsing.
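A minimal sketch of the CLONE_SOURCETYPE mechanics described above; the stanza and sourcetype names are made up for illustration:

# props.conf
[original_sourcetype]
TRANSFORMS-clone = clone_for_reprocessing

# transforms.conf
[clone_for_reprocessing]
REGEX = .
CLONE_SOURCETYPE = new_sourcetype

The cloned copy re-enters the pipeline as new_sourcetype (after line breaking, as noted above), while the original event continues unchanged unless you filter it out separately.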
Thanks @PickleRick. HF: There is an HF in the way, so yes, it is cooking the data. Hence my intention to perform an INGEST_EVAL on the IDX tier of the new instance to remap that meta at the point of indexing. I understand that this is viable and a useful workaround for the cooked-data issue. If it is viable, then it minimises changes to the forwarder tier, which is desirable for stability. This was one of the sources recommended to me: https://conf.splunk.com/files/2020/slides/PLA1154C.pdf Also: https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L153C1-L154C1
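For illustration, a hedged sketch of that kind of INGEST_EVAL index remap, in the style of the linked examples; the stanza and index names are assumptions, and (per the discussion above) it only helps where the indexer-tier props actually fire for the incoming data:

# props.conf on the IDX tier
[my_sourcetype]
TRANSFORMS-remap_index = remap_old_index

# transforms.conf
[remap_old_index]
# Rewrite the destination index at ingest time; leave other events untouched.
INGEST_EVAL = index=if(index=="old_index", "new_index", index)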
The value of the REGEX attribute must be a valid regular expression that contains at least one capturing group. The expression is used to match the data in the incoming event (_raw) and cannot use key specifiers such as 'source::'. Try these transforms:

[set_new_sourcetype]
SOURCE_KEY = MetaData:Source
# Escape the literal dot so it does not match arbitrary characters.
REGEX = (/var/log/path/tofile\.log)
FORMAT = sourcetype::new_sourcetype_with_new_timeformat
DEST_KEY = MetaData:Sourcetype
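To complete the picture, the transform still has to be attached in props.conf; a short sketch, assuming the events currently arrive under a hypothetical sourcetype named old_sourcetype:

# props.conf
[old_sourcetype]
TRANSFORMS-set_st = set_new_sourcetype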
There are two possible issues.

1. Are you forwarding to two destinations from the originating UF or from an intermediate HF? In the latter case the data is forwarded as parsed, so it's not processed again. (That can AFAIR be changed, but it's tricky.)

2. Since props are based on sourcetype/source/host, you can't just rewrite one index to another globally. You need to do it selectively, for example on a per-sourcetype basis (possibly with some conditional execution), or define wildcard-based global stanzas to conditionally rewrite destination indexes. Kinda ugly and might be troublesome to maintain. A sketch of the per-sourcetype variant follows below.
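A hedged sketch of the per-sourcetype variant; all names here are illustrative only:

# props.conf
[my_sourcetype]
TRANSFORMS-route_index = rewrite_index_for_my_sourcetype

# transforms.conf
[rewrite_index_for_my_sourcetype]
# Match every event of this sourcetype and point it at the new index.
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = new_index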
With a relatively dense search, the approach shown by @yuanliu is the most typical thing to do. But if you expect that the search will be sparse, you might want to use the lookup by means of a subsearch to generate a set of conditions directly into your search:

<your_base_search>
    [ | inputlookup your_lookup.csv
      | rename if needed ]
| <rest_of_your_search>

This might prove to be more effective if your resulting set of conditions is small and yields only a handful of events.
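A concrete, made-up instance of the pattern, assuming a lookup suspicious_ips.csv with an ip column and web access events that carry a clientip field:

index=web sourcetype=access_combined
    [ | inputlookup suspicious_ips.csv
      | rename ip AS clientip
      | fields clientip ]
| stats count BY clientip

The subsearch expands into an ORed list of clientip=... conditions, so the base search only retrieves events matching the lookup values.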