All Posts


Hi, I have a dataset with very poor quality and multiple encoding errors. Some fields contain data like "&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;" which should be "Алексей". My first idea for converting that was to find every faulty dataset and convert it externally with a script, but I'm curious whether there's a better way using Splunk - I just have no idea how to get there. I somehow need to get every &#(\d{4}); and I could use printf("%c", \1) to get the correct Unicode character, but I have no idea how to apply that to every occurrence in a single field. Currently I have data like this:

id  name
1   &#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;

Where I want to get to is this:

id  name                                               correct_name
1   &#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;  Алексей

Any ideas if that is possible without using Python scripts in Splunk? Regards, Thorsten
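One possible approach, as a minimal sketch (assumptions: the entities always have the &#NNNN; form, the field is called name, and the mvmap and printf eval functions are available, i.e. Splunk 8.0 or later):

| rex field=name max_match=0 "(?<part>&#\d+;|[^&]+)"
| eval correct_name=mvjoin(mvmap(part, if(match(part, "^&#\d+;$"), printf("%c", tonumber(replace(part, "&#(\d+);", "\1"))), part)), "")

The rex splits name into a multivalue field of entity references and literal runs, mvmap decodes each &#NNNN; with printf("%c", ...) while passing everything else through unchanged, and mvjoin glues the pieces back together into correct_name.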
Hello, I am looking for any guidance or info about the possibility of using Microsoft AMA agents to forward logs to Splunk instead of using Splunk universal forwarders. I know you will say "but why?!" - let's say I have some requirements and constraints that oblige me to use AMA agents. I need to know the feasibility of this integration and whether there are any known issues or limitations. Thank you for your help. (Excuse me if my question is vague, I am kind of lost here.)
Hi, can you please tell me how I can extract the events for which the difference between the current time and timestampOfReception is greater than 4 hours, for the Splunk query below:

`eoc_stp_events_indexes` host=p* OR host=azure_srt_prd_0001 (messageType=seev.047* OR messageType=SEEV.047*) status=SUCCESS targetPlatform=SRS_ESES NOT [ search (index=events_prod_srt_shareholders_esa OR index=eoc_srt) seev.047 Name="Received Disclosure Response Command" | spath input=Properties.appHdr | rename bizMsgIdr as messageBusinessIdentifier | fields messageBusinessIdentifier ]
| eval Current_time=strftime(now(),"%Y-%m-%d %H:%M:%S")
| eval diff=Current_time-timestampOfReception
| fillnull value="-" timestampOfReception messageOriginIdentifier messageBusinessIdentifier direction messageType currentPlatform sAAUserReference
| sort -timestampOfReception
| table diff, Current_time, timestampOfReception, messageOriginIdentifier, messageType, status, messageBusinessIdentifier, originPlatform, direction, sourcePlatform, currentPlatform, targetPlatform, senderIdentifier, receiverIdentifier
| rename timestampOfReception AS "Timestamp of reception", originPlatform AS "Origin platform", sourcePlatform AS "Source platform", targetPlatform AS "Target platform", senderIdentifier AS "Sender identifier", receiverIdentifier AS "Receiver identifier", messageOriginIdentifier AS "Origin identifier", messageBusinessIdentifier AS "Business identifier", direction AS Direction, currentPlatform AS "Current platform", sAAUserReference AS "SAA user reference", messageType AS "Message type"
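One possible direction, as a sketch (it assumes timestampOfReception is a string in "%Y-%m-%d %H:%M:%S" format - adjust the strptime format to whatever the field actually contains): compare epoch values instead of subtracting formatted strings.

| eval reception_epoch=strptime(timestampOfReception, "%Y-%m-%d %H:%M:%S")
| eval diff_seconds=now() - reception_epoch
| where diff_seconds > 4*3600
| eval diff=tostring(diff_seconds, "duration")

Note that the original eval diff=Current_time-timestampOfReception subtracts two strings, which is why no usable difference comes out of it.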
Maybe. All knowledge objects in the ESCU app will be disabled, so any app (including ES) that tries to use them will likely fail.
The easier way to mask data is with SEDCMD in props.conf:

SEDCMD-emailaddr-anonymizer = s/([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,63})/********@*********/g

(Using [A-Za-z] rather than [A-z]; [A-z] also matches the punctuation characters that sit between Z and a in ASCII.)
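For placement, a minimal props.conf stanza might look like this ([my_sourcetype] is a placeholder for the actual sourcetype; SEDCMD is applied at parse time, so it belongs on the indexers or a heavy forwarder and only affects newly indexed events):

# props.conf
[my_sourcetype]
SEDCMD-emailaddr-anonymizer = s/([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,63})/********@*********/g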
The error message is complaining about the trailing | in the subsearches:

[|inputlookup internal_ranges.csv |]) AND (All_Traffic.dest_ip [|inputlookup internal_ranges.csv |]
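Removing the pipe before each closing bracket should clear the parse error, i.e. [| inputlookup internal_ranges.csv ] - the leading pipe before inputlookup is fine; it's only the one just before the ] that breaks the parser.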
We are using a SaaS-based controller. If we needed to restore aspects of our configuration from yesterday, or from perhaps a week or a month ago, what is the process for us to do that? Do you perform regular (and granular) backups on our behalf, or are we expected to download configurations ourselves? If so, what options are there that allow us to automate this, e.g. APIs, scheduled jobs, etc.?
I need to mask email addresses in my data. I'm trying to use transforms.conf:

[emailaddr-anonymizer]
REGEX = ([A-z0-9._%+-]+@[A-z0-9.-]+\.[A-z]{2,63})
FORMAT = ********@*********
DEST_KEY = _raw

If I do this the entire log is masked, however I want only the email to be masked. Please can someone help me?
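For reference, the whole event gets replaced because FORMAT rewrites all of _raw without re-inserting the unmatched text. A sketch of one way around that, capturing the surrounding text into $1/$2 (it masks only the first address per event - the SEDCMD answer above with its /g flag handles multiple addresses; [my_sourcetype] is a placeholder):

# transforms.conf
[emailaddr-anonymizer]
REGEX = ^(.*?)[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,63}(.*)$
FORMAT = $1********@*********$2
DEST_KEY = _raw

# props.conf
[my_sourcetype]
TRANSFORMS-emailaddr = emailaddr-anonymizer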
The way it works with installations/upgrades is that the installation package contains binaries and default settings which you should not touch. Ever. You only edit your local configs. That way, if you deploy a new software version (whether by means of an rpm upgrade or simply untarring the archive over your old installation), it replaces the default settings but leaves your installation-specific local settings intact.

That's why it's always stressed: don't touch the files residing in etc/system/default. The same goes for apps - apps come with their own default directory which you should not touch, since it will get overwritten on app upgrade; you create your config settings in the local directory.
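Concretely, the layering looks like this (paths under $SPLUNK_HOME):

etc/system/default/        shipped defaults - replaced on every upgrade, never edit
etc/system/local/          your system-wide settings - preserved across upgrades
etc/apps/<app>/default/    app defaults - replaced when the app is upgraded
etc/apps/<app>/local/      your app-specific settings - preserved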
Thanks for the replies. Please note that there was some planning in doing this: the new system had been up for a few months, and the splunk+data was rsynced over several times and tested. We obviously failed on the testing aspect; I have a single contact for users of this application and relied on their reports of everything looking OK. Our system is only used on weekdays, so the Saturday change and the realization that things were amiss resulted in turning off the new server and reverting to the original server - so no unplanned production outage, and the system is running as it was before.

I have to say (regarding inventsekars' reply) that in step 3 it is not immediately clear to me that I am to reinstall Splunk on top of my original /opt/splunk directory rsynced over from the existing system. I would have thought that all the existing config files would be overwritten, and it would then seem to be like a new install. That is why I installed Splunk 8.2.6 on the new server first, and then rsynced the existing data on top of it, to ensure any configuration files were intact.

I am opening a ticket with Splunk today to go over the process and investigate why the data was not reachable by the application. I appreciate the suggestions, but I do not know how to "check _internal index for errors" or do "tstats". Maybe I should request my employer send me for Splunk admin training, if they are expecting me to administer it. Cheers, Michael.
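For what it's worth, those two suggestions boil down to searches roughly like these (both use the standard internal index and the tstats command; nothing environment-specific):

index=_internal log_level=ERROR | stats count by component

to see what Splunk has logged about its own problems, and

| tstats count where index=* by index

to confirm which indexes actually contain data and how much.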
You can return more than one field from the subsearch. If you have a subsearch returning a set of values

field1=val1 field2=val2 field3=val3
field1=val4 field2=val5 field3=val6
field1=val7 field2=val8 field3=val9
...

it is rendered as sets of conditions in the outer search:

(field1=val1 AND field2=val2 AND field3=val3) OR (field1=val4 AND field2=val5 AND field3=val6) OR (field1=val7 AND field2=val8 AND field3=val9) OR ...

So you can filter by any set of fields.
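For example (index and field names made up for illustration):

index=web [ search index=auth action=failure | fields user, src_ip ]

Each user/src_ip pair the subsearch returns expands into exactly that OR of ANDed conditions in the outer search.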
Hi colleagues, I have faced the same issue and am just curious where the limitation (for the alert action to lookup) is actually defined. KR, Lis
Hi, I didn't find an email address for the developer, Christopher Caldwell, so I'm trying it this way. The BlueCat Address Manager RESTful API is changing from version 1 to version 2, and version 1 will be removed in 2025. Are there any plans to update the add-on to support the new API? I would be very pleased! Greetings, Mirko
Hi gcusello, this is working fine when we have a single field in common. If we have more than one field as a key between the two searches, is it possible to exclude results from search 2 based on the two fields instead of one?
I know, it's a bit tricky. Since Splunk does something called "schema on read" (mostly, apart from the indexed fields), it searches for data differently than, for example, your typical RDBMS does. Splunk first searches for the value, and having found that value in a set of events, it checks whether the value's location fits the definition of a field. For example, if you have three events like this:

field1=whatever field2=wherever field3=whenever
field4=otherwhatever field5=otherwherever field6=otherwhenever
field7=otherwhatever site=otherwherever field8=otherwhenever

Assuming you have your key=value definitions set, if you search for site=otherwherever, Splunk will first choose the second and third events from its index (since the first one doesn't have the "otherwherever" string anywhere), then parse both events into fields and decide that only the third one matches. It can sometimes create some interesting issues in unusual cases, especially involving partial matches (and in your case "13" is indeed a partial match on "000000013").
Thanks @ITWhisperer. It works like a charm.
Try putting an extra space before Mandatory so that lexicographical sorting will prioritise it above single-spaced values - note the two spaces between <<MATCHSEG1>> and Mandatory:

| foreach "* Mandatory" [| rename "<<FIELD>>" as "<<MATCHSEG1>>  Mandatory"]
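This works because a space sorts before any letter, so after the rename the order becomes, for example:

AMER  Mandatory   (two spaces - the second space sorts before the "A" of "All")
AMER All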
| append [| makeresults | eval location="AM"]
Hi @mnj1809, could you share your search and the possible values for regions in text mode? Ciao. Giuseppe
Hello Splunkers, I have a Region filter on the dashboard. This Region filter has the values AMER and EMEA. I have a requirement to reorder the above fields based on the selection of the Region filter, as follows: I want the "<Region> Mandatory" field to appear before "<Region> All". Thanks in advance. @tscroggins @yuanliu @bowesmana