All Posts

Hi @Rakzskull, I'm a Community Moderator in the Splunk Community. This question was posted 6 years ago, so it might not get the attention it needs to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Thanks. As I suspected, this strikes me as fraught with challenges and difficult to fully replicate in a lab, which increases the risk of an outage or lost data during dual forwarding. I think we will need to keep the existing index names to maintain ease of migration. Sadly, it isn't wrong, just not ideal for a clean new environment.
The UF is sending the data as just cooked. The HF is sending it as cooked and parsed. The issue is not with INGEST_EVAL. The thing is that on the indexer tier, as the data has already been processed by the HF, no props are fired. The events are immediately routed to the indexing pipeline, completely skipping the preceding steps. You could try to fiddle with the routing on the splunktcp input so that it gets routed to the typingQueue, but that can break other things.
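To illustrate that last point: the queue routing is controlled by the route setting on the splunktcp input in inputs.conf on the indexers. A minimal sketch of such an override is below; the port, the key names, and the default route string are assumptions here (the default varies by version, so check inputs.conf.spec for yours), this is unsupported territory, and as noted it can break other things such as cluster replication. Treat it as an illustration only.

# inputs.conf on the indexers (sketch only - verify the default route for your version first)
[splunktcp://9997]
# keep replication traffic on its normal path, but send already-parsed (cooked) data
# back through the typing queue instead of straight to the index queue,
# so that index-time transforms get a chance to fire again
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_linebreaker:typingQueue;absent_key:_linebreaker:parsingQueue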
Hello, good afternoon. Does anyone know how to integrate the Adabas database with Splunk, and where I can download the JDBC drivers for Splunk DB Connect?
Hello, I am using Splunk Enterprise. From the MS site I registered an app with permissions for Security Alerts and Security Incidents, and also added the account in Splunk. It is impossible to add an input; I get the error below:
It might also be an issue with a badly set (or unset) EVENT_BREAKER (which is not the same as LINE_BREAKER). Moving the discussion to Getting Data In.
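For reference, EVENT_BREAKER is set in props.conf on the universal forwarder and only tells the UF where it may safely split the chunked stream; LINE_BREAKER is applied later, at parse time. A minimal sketch, where the sourcetype name and the regex are assumptions for data whose events start with an ISO-style date:

# props.conf on the universal forwarder (sketch - sourcetype and regex are assumptions)
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
# break the stream at line boundaries that are followed by a date such as 2024-01-31
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}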
OK. Sourcetype rewriting is one thing, but I have a hunch that you're trying not only to get a particular metadata field rewritten but would also like to reprocess the whole event with the new sourcetype. And that (depending on how far "back" you would want to go) is either much more difficult or plain impossible. See here - https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 The event goes through all stages of the ingestion pipeline in sequence and there is no way for an event to "go back". Even if you rewrite the sourcetype, it will still get processed further "down the street" according to the props/transforms for the original sourcetype. The only difference will be that it is written into the index with the new sourcetype, and when you search your data the search-time props for the new sourcetype will be applied. There is one "exception" - if you use CLONE_SOURCETYPE, the event will be "forked" and a new copy with the new sourcetype will be ingested into the pipeline again, but it will still be after linebreaking and - as far as I remember - after timestamp parsing.
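For completeness, CLONE_SOURCETYPE lives in transforms.conf and is applied from props.conf. A minimal sketch, with placeholder sourcetype names:

# props.conf (sketch)
[original_sourcetype]
TRANSFORMS-clone = clone_to_new_sourcetype

# transforms.conf (sketch)
[clone_to_new_sourcetype]
# match every event and fork a copy under the new sourcetype
REGEX = .
CLONE_SOURCETYPE = new_sourcetype

The cloned copy then picks up whatever index-time props/transforms are defined for new_sourcetype from that point in the pipeline onward.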
Thanks @PickleRick. HF: There is an HF in the way, so yes, it is cooking the data. Hence my intention to perform an INGEST_EVAL on the IDX tier of the new instance to remap that metadata at the point of indexing. I understand that this is viable and a useful workaround for the cooked-data issue. If so, it minimises changes to the forwarder tier, which is desirable for stability. This was one of the sources recommended to me: https://conf.splunk.com/files/2020/slides/PLA1154C.pdf Also: https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L153C1-L154C1
The value of the REGEX attribute must be a valid regular expression that contains at least one capturing group. The expression is used to match the data in the incoming event (_raw) and cannot use key specifiers such as 'source::'. Try this transform instead:

[set_new_sourcetype]
SOURCE_KEY = MetaData:Source
REGEX = (/var/log/path/tofile.log)
FORMAT = sourcetype::new_sourcetype_with_new_timeformat
DEST_KEY = MetaData:Sourcetype
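For that transform to fire, you also need to reference it from props.conf on the parsing tier. A sketch, assuming the data currently arrives under a sourcetype called old_sourcetype (the stanza name is an assumption):

# props.conf (sketch)
[old_sourcetype]
TRANSFORMS-set_new_sourcetype = set_new_sourcetype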
Thanks, everyone, for the fast response!
There are two possible issues.
1. Are you forwarding to the two destinations from the originating UF or from an intermediate HF? In the latter case the data is forwarded as parsed, so it's not processed again. (That can, AFAIR, be changed, but it's tricky.)
2. Since props are based on sourcetype/source/host, you can't just rewrite one index to another globally. You need to do it selectively, for example on a per-sourcetype basis (possibly with some conditional execution), or define wildcard-based global stanzas to conditionally rewrite destination indexes. Kinda ugly and might be troublesome to maintain.
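As an illustration of point 2, a per-sourcetype index rewrite on the parsing tier might look like the sketch below; the sourcetype and index names are placeholders:

# props.conf (sketch)
[ABC_cisco_sourcetype]
TRANSFORMS-route_index = route_cisco_to_new_index

# transforms.conf (sketch)
[route_cisco_to_new_index]
# match every event of this sourcetype and overwrite its destination index
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = cisco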
With a relatively dense search the approach shown by @yuanliu is the most typical thing to do. But if you expect that the search will be sparse, you might want to use the lookup by means of a subsearch to generate a set of conditions directly into your search:

<your_base_search>
    [ | inputlookup your_lookup.csv
      | rename if needed ]
| <rest_of_your_search>

This might prove to be more effective if your resulting set of conditions is small and yields only a handful of events.
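A concrete version of that pattern, with hypothetical index, lookup, and field names:

index=web sourcetype=access_combined
    [ | inputlookup blocked_ips.csv
      | rename ip AS clientip
      | fields clientip ]
| stats count by clientip

The subsearch expands into a list of clientip=... conditions ORed together, so it works best when the lookup is small; very large lookups can hit subsearch result limits, in which case the lookup-based approach is safer.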
Hi @gcusello, I was able to find those in automatic lookups and deleted them; the errors are fixed now. Thanks!
Dear all, requesting your support in achieving the below. I have a method which has a custom object as a parameter, on which I need to run a getter chain. I first need to cast the parameter into an Object array, then cast that into my POJO class, and then run the getter on it. How can this be achieved? My code snippet is below.

ClassName: com.mj.common.mjServiceExecute
Method: execute(com.mj.mjapi.mjmessage)

public abstract class mjServiceExecute implements mjServiceExecuteintf {
    public mjmessage execute(mjmessage paramMjMessage) {
        mjmessage mjmjmesg = null;
        try {
            Object[] arrayOfObject = (Object[]) paramMjMessage.getPayload();
            MjHeaderVO mjhdrvo = (MjHeaderVO) arrayOfObject[0];
            String str1 = mjhdrvo.getName();
        } catch (Exception e) {
            // handle a failed cast or missing payload element
        }
        return mjmjmesg;
    }
}

I want to extract the value of str1 to split the business transaction. Requesting your assistance.
I am running into this issue as well. Following for more information. 
I'm working on an environment with a mature clustered Splunk instance. The client wishes to start dual-forwarding to a new replacement environment which is a separate legal entity (they understand the imperfections of dual-forwarding, possible data loss, etc.). They need to rename the destination indexes in the new environment, dropping a prefix we can call 'ABC'. I believe the easiest way to approach this is via INGEST_EVAL on the new indexers. There are approx. 20 indexes to rename, for example:

ABC_linux
ABC_cisco

transforms.conf (located on the NEW indexers):

[index_remap_A]
INGEST_EVAL = index="value"

I have read the transforms.conf spec file for 9.3.1 and a 2020 .conf presentation but I am unable to find great examples. Has anyone taken this approach? As it is only a low volume of remaps, it may be best to approach this statically.
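A sketch of what the prefix-stripping remap could look like, assuming the prefix is literally "ABC_" and that the data actually reaches the typing pipeline on the new indexers (see the replies above about HF-parsed data skipping props); the sourcetype stanza name is a placeholder and would need to be attached to whichever sourcetypes carry the data:

# transforms.conf on the NEW indexers (sketch)
[index_remap_drop_prefix]
# ABC_linux -> linux, ABC_cisco -> cisco, etc.
INGEST_EVAL = index := replace(index, "^ABC_", "")

# props.conf on the NEW indexers (sketch - repeat per sourcetype or use a wildcard stanza)
[ABC_linux_sourcetype]
TRANSFORMS-index_remap = index_remap_drop_prefix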
@sivaranjiniG Please let us know the solution as we are facing the same task.
Hello Splunkers, I'm working with the latest version of Splunk Add-on Builder to index data from a REST API. The TA only pulls the first page of results by calling:

https://mywebpage.com/api/source/v2

At the bottom of the pulled data is a URL for the next page:

"next_url" : "/api/source/v2?last=5431"

How do I configure the TA to iterate through all the pages? I checked the link below, but I don't understand how (or whether it is possible) to pass the variable from the modular input to my endpoint like this, or in some other way:

https://mywebpage.com/api/source/v2?last=${next_url}

https://docs.splunk.com/Documentation/AddonBuilder/4.3.0/UserGuide/ConfigureDataCollection#Pass_values_from_data_input_parameters

Any ideas? Thanks!
It is usually easier for us to help you when you show us what events you are working with, but in lieu of that, assuming you have events with the following fields: Work_Month_week, "work day of week", "Number of work hours", you could try something like this:

| table Work_Month_week, "work day of week", "Number of work hours"
| eventstats count as total_week_day sum("Number of work hours") as Week_total by Work_Month_week
| eval "percent work hours"=100*'Number of work hours'/Week_total
The first thing you need is an understanding of your data. It is your data. We do not have access to it and do not know what data you have, so it is difficult for us to determine what information you might be able to extract from it. Secondly, risk is subjective. What do you deem to be high risk? What evidence do you have in your logs (that are now in Splunk) that might help you determine whether something is "risky"? Such open questions as you have posed only lead to more questions.