Hi @cadrija,
I’m a Community Moderator in the Splunk Community.
This question was posted 1 year ago, so it might not get the attention needed for it to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.
Thank you!
Hi @Rakzskull,
I’m a Community Moderator in the Splunk Community.
This question was posted 6 years ago, so it might not get the attention needed for it to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.
Thank you!
Thanks. As I suspected, this strikes me as fraught with challenges and impossible to fully replicate in a lab, which increases the risk of an outage or lost data during dual forwarding. Sadly, I think we will need to keep the index names to ease migration; it isn't wrong, just not ideal for a clean new environment.
The UF is sending the data as just cooked. The HF is sending it as cooked and parsed. The issue is not with INGEST_EVAL. The thing is that on the indexer tier - as the data has already been processed by the HF - no props are fired. The events are immediately routed to the indexing queue, completely skipping the preceding steps. You could try to fiddle with the routing on the splunktcp input so that parsed data gets routed to the typingQueue instead, but that can break other things.
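To illustrate, a hedged sketch of what that fiddling could look like in inputs.conf on the indexer tier. The route setting is normally managed by Splunk itself; treat the key names below as assumptions to verify against inputs.conf.spec for your version, and test carefully because, as said, this can break other things (e.g. replication):
# inputs.conf on the indexer - illustrative only, verify against
# inputs.conf.spec before changing anything; the port is just an example
[splunktcp://9997]
# The default route sends already-parsed data (has_key:_linebreaker) to
# indexQueue; this variant points it at typingQueue so index-time
# props/transforms fire again for HF-parsed events.
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:typingQueue;absent_key:_linebreaker:parsingQueue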
Hello, good afternoon. Does anyone know how to integrate the Adabas database with Splunk, and where I can download the JDBC drivers for Splunk DB Connect?
Hello, I am using Splunk Enterprise. From the MS site I registered an app with permissions for Security Alerts and Security Incidents, and also added the account in Splunk. It is impossible to add an input; I get the error below:
OK. Sourcetype rewriting is one thing, but I have a hunch that you're trying not only to get a particular metadata field rewritten but would also like to reprocess the whole event with the new sourcetype. And that (depending on how far "back" you would want to go) is either much more difficult or plain impossible. See here - https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 The event comes through all stages of the ingestion pipeline in sequence and there is no way for an event to "go back". Even if you rewrite the sourcetype, it will still get processed further "down the street" according to the props/transforms for the original sourcetype. The only difference will be that it will be written into the index with the new sourcetype, and when you search your data the search-time props for the new sourcetype will be applied. There is one "exception" - if you use CLONE_SOURCETYPE, the event will be "forked" and a new copy with the new sourcetype will be ingested into the pipeline again, but it will still be after linebreaking and - as far as I remember - after timestamp parsing.
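For completeness, a minimal sketch of the CLONE_SOURCETYPE variant (all stanza and sourcetype names below are placeholders, not taken from your setup):
# transforms.conf - clone every matching event into a new sourcetype
[clone_to_new_sourcetype]
REGEX = .
CLONE_SOURCETYPE = new_sourcetype_name

# props.conf - attach the clone transform to the original sourcetype
[original_sourcetype]
TRANSFORMS-clone = clone_to_new_sourcetype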
Thanks @PickleRick . HF: There is an HF in the way, so yes, it is cooking the data. Hence my intention to perform an INGEST_EVAL on the IDX tier of the new instance to remap that metadata at the point of indexing. I understand that this is viable and a useful workaround for the cooked-data issue. If so, it minimises changes to the forwarder tier, which is desirable for stability. This was one of the sources recommended to me: https://conf.splunk.com/files/2020/slides/PLA1154C.pdf Also: https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L153C1-L154C1
The value of the REGEX attribute must be a valid regular expression that contains at least one capturing group. The expression is used to match the data in the incoming event (_raw) and cannot use key specifiers such as 'source::'. Try this transform:
[set_new_sourcetype]
SOURCE_KEY = MetaData:Source
REGEX = (/var/log/path/tofile.log)
FORMAT = sourcetype::new_sourcetype_with_new_timeformat
DEST_KEY = MetaData:Sourcetype
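Assuming the events currently arrive with a sourcetype you can key on (the stanza name below is a placeholder), the transform would then be wired up in props.conf along these lines:
# props.conf - apply the transform at index time
[your_original_sourcetype]
TRANSFORMS-set_st = set_new_sourcetype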
There are two possible issues.
1. Are you forwarding to two destinations from the originating UF or from an intermediate HF? In the latter case the data is forwarded as parsed, so it's not processed again (AFAIR that can be changed, but it's tricky).
2. Since props are based on sourcetype/source/host, you can't just rewrite one index to another globally. You need to do it selectively - for example, on a per-sourcetype basis (possibly with some conditional execution) - or define wildcard-based global stanzas to conditionally rewrite destination indexes. Kinda ugly and might be troublesome to maintain. A sketch of the per-sourcetype variant follows.
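As a rough illustration of the per-sourcetype variant (every name below is a placeholder, untested):
# props.conf - key on the sourcetype whose index you want to rewrite
[sourcetype_to_redirect]
TRANSFORMS-route_idx = rewrite_dest_index

# transforms.conf - rewrite the destination index at index time
[rewrite_dest_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = new_index_name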
With a relatively dense search, the approach shown by @yuanliu is the most typical thing to do. But if you expect the search to be sparse, you might want to use the lookup by means of a subsearch to generate a set of conditions directly into your search:
<your_base_search>
    [ | inputlookup your_lookup.csv
      | rename if needed ]
| <rest_of_your_search>
This might prove to be more effective if your resulting set of conditions is small and yields only a handful of events.
Dear all, requesting your support in achieving the below. I have a method with a custom object as a parameter. To run a getter chain, I first need to cast the parameter into an Object array, then cast the first element into my POJO class, and then run the getter on it. How can this be achieved? My code snippet is below.
ClassName: com.mj.common.mjServiceExecute
Method: execute(com.mj.mjapi.mjmessage)
public abstract class mjServiceExecute implements mjServiceExecuteintf {
    public mjmessage execute(mjmessage paramMjMessage) {
        mjmessage mjmjmesg = null;
        try {
            // Cast the payload to an Object array, then the first element to the POJO
            Object[] arrayOfObject = (Object[]) paramMjMessage.getPayload();
            MjHeaderVO mjhdrvo = (MjHeaderVO) arrayOfObject[0];
            String str1 = mjhdrvo.getName();
        } catch (Exception e) {
            // handle the exception
        }
        return mjmjmesg;
    }
}
I want to extract the value of str1 to split the business transaction. Requesting your assistance.
I'm working on an environment with a mature clustered Splunk instance. The client wishes to start dual-forwarding to a new replacement environment that is a separate legal entity (they understand the imperfections of dual-forwarding, possible data loss, etc.). They need to rename the destination indexes in the new environment, dropping a prefix we can call 'ABC'. I believe the easiest way to approach this is via INGEST_EVAL on the new indexers. There are approx. 20 indexes to rename, for example: ABC_linux, ABC_cisco.
transforms.conf (located on the NEW indexers):
[index_remap_A]
INGEST_EVAL = index="value"
I have read the transforms.conf spec file for 9.3.1 and a 2020 .conf presentation, but I am unable to find good examples. Has anyone taken this approach? As it is only a low volume of remaps, it may be best to approach this statically.
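For reference, a minimal sketch of the kind of remap under consideration, using replace() to strip the prefix (stanza names are hypothetical and this is untested against a clustered deployment):
# transforms.conf on the NEW indexers
[index_remap_drop_abc]
INGEST_EVAL = index=replace(index, "^ABC_", "")

# props.conf - a wildcard host stanza applies the transform to all
# incoming data; scope it per sourcetype instead if that is too broad
[host::*]
TRANSFORMS-remap_idx = index_remap_drop_abc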