All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We are using a SaaS-based controller. If we needed to restore aspects of our configuration from yesterday, or from perhaps a week or month ago, what is the process for us to do that? Do you perform regular (and granular) backups on our behalf, or are we expected to download configurations ourselves? If so, what options are there that allow us to automate this, e.g. APIs, scheduled jobs, etc.?
I need to mask email addresses in my data. I'm trying to use transforms.conf:

[emailaddr-anonymizer]
REGEX = ([A-z0-9._%+-]+@[A-z0-9.-]+\.[A-z]{2,63})
FORMAT = ********@*********
DEST_KEY = _raw

If I do this, the entire log is masked; however, I want only the email to be masked. Please, can someone help me?
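For reference, the usual way to mask a substring in place (rather than replacing all of _raw, which is what FORMAT with DEST_KEY = _raw does) is a SEDCMD in props.conf. A minimal sketch, assuming a hypothetical sourcetype name my_sourcetype; note that [A-Za-z] is used instead of [A-z], since [A-z] also matches the punctuation characters between Z and a:

```
[my_sourcetype]
SEDCMD-mask-email = s/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,63}/********@*********/g
```

The sed-style s///g substitution rewrites only the matched email and leaves the rest of the event intact.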
The way it works with installations/upgrades is that the installation package contains binaries and default settings which you should not touch. Ever. You only edit your local configs. That way, if you deploy a new software version (whether by means of an rpm upgrade or simply untarring the archive over your old installation), it replaces the default settings but leaves your installation-specific local settings intact. That's why it's always stressed: don't touch the files residing in etc/system/default. (The same goes for apps - apps come with their own default directory which you should not touch, since it'll get overwritten on app upgrade; you create your config settings in the local directory.)
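As a sketch of the layout being described (assuming a standard tar install under /opt/splunk):

```
/opt/splunk/etc/system/default/       # shipped defaults - never edit, replaced on upgrade
/opt/splunk/etc/system/local/         # your settings - preserved across upgrades
/opt/splunk/etc/apps/<app>/default/   # app defaults - overwritten on app upgrade
/opt/splunk/etc/apps/<app>/local/     # your app settings - preserved
```

Settings in local take precedence over the same stanzas in default, which is why edits there survive an upgrade.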
Thanks for the replies. Please note that there was some planning in doing this: the new system had been up for a few months, and the splunk+data was rsynced over several times and tested. We obviously failed on the testing aspect; I have a single contact for users of this application, and relied on their reports of everything looking OK. Our system is only used on weekdays, so the Saturday change, and the realization that things were amiss, resulted in turning off the new server and reverting to the original server - so no unplanned production outage, and the system is running as it was before.

I have to say (from inventsekars' reply) that in step 3 it is not immediately clear to me that I am to reinstall Splunk on top of my original /opt/splunk directory rsynced over from the existing system. I would have thought that all the existing config files would be overwritten, and it would then seem to be like a new install. That is why I installed Splunk 8.2.6 on the new server first, and then rsynced the existing data on top of it, to ensure any configuration files were intact.

I am opening a ticket with Splunk today to go over the process and investigate why the data was not reachable by the application. I appreciate the suggestions, but I do not know how to "check the _internal index for errors" or use "tstats". Maybe I should request my employer send me for Splunk admin training, if they are expecting me to administer it. Cheers, Michael.
You can return more than one field from the subsearch. If you have a subsearch returning a set of values

field1=val1 field2=val2 field3=val3
field1=val4 field2=val5 field3=val6
field1=val7 field2=val8 field3=val9
...

it is rendered as sets of conditions in the outer search:

(field1=val1 AND field2=val2 AND field3=val3) OR (field1=val4 AND field2=val5 AND field3=val6) OR (field1=val7 AND field2=val8 AND field3=val9) OR ...

So you can filter by any set of fields.
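A minimal sketch of the pattern, with hypothetical index names - the subsearch returns the three fields, and the outer search is then filtered by each combination of values:

```
index=web_prod
    [ search index=audit_prod action=blocked
      | fields field1, field2, field3 ]
| stats count by field1, field2, field3
```

The trailing `| fields` in the subsearch controls exactly which fields end up in the generated conditions.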
Hi colleagues, I have faced the same issue and am just curious where the limitation (for the alert action to a lookup) is actually defined. KR, Lis
Hi, I didn't find an email address for the developer Christopher Caldwell, so I am trying it this way. The BlueCat Address Manager RESTful API is changing from version 1 to version 2, and version 1 will be removed in 2025. Are there any plans to update the add-on to support the new API? I would be very pleased! Greetings, Mirko
Hi gcusello, this is working fine when we have a single field in common. If we have more than one field as a key between the two searches, is it possible to exclude results from search 2 based on the two fields instead of one?
I know, it's a bit tricky. Since Splunk does something called "schema on read" (mostly, apart from the indexed fields), it searches for data differently than, for example, your typical RDBMS does. Splunk first searches for the value, and having found that value in a set of events, it checks whether the value's location fits the definition of a field. For example, if you have three events like this:

field1=whatever field2=wherever field3=whenever
field4=otherwhatever field5=otherwherever field6=otherwhenever
field7=otherwhatever site=otherwherever field8=otherwhenever

Assuming you have your key=value definitions set, if you search for site=otherwherever, Splunk will first choose the second and third events from its index (since the first one doesn't have the "otherwherever" string anywhere), then parse both events into fields and decide that only the third one matches. It can sometimes create some interesting issues in unusual cases, especially involving partial matches (and in your case "13" is indeed a partial match on "000000013").
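One way to pin down the first (indexed-term) phase is TERM(), which matches only the exact indexed term rather than any event containing the string. A hedged sketch with a hypothetical index name:

```
index=my_index TERM(0000000013) logid="0000000013"
```

TERM() only helps when the value contains no major breakers, but for a plain digit string like this it restricts the lexicon lookup to the exact term before the field check runs.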
Thanks @ITWhisperer. It works like a charm.
Try putting an extra space before Mandatory so that lexicographical sorting will prioritise it above single-spaced values:

| foreach "* Mandatory" [| rename "<<FIELD>>" as "<<MATCHSEG1>> Mandatory"]
| append [| makeresults | eval location="AM"]
Hi @mnj1809, could you share your search and the possible values for regions in text mode? Ciao. Giuseppe
Hello Splunkers, I have a Region filter on the dashboard. This Region filter has the values AMER and EMEA. I have a requirement to reorder the fields above based on the selection of the Region filter, as follows: I want the "<Region> Mandatory" field to appear before "<Region> All". Thanks in advance. @tscroggins @yuanliu @bowesmana
OK. Thanks for your help.

index=<myindex> logid="0000000013" AND logid!="13" | stats count

gives 3,183,571.
Hi @Nawab, no, it isn't possible: in Splunk, data are replicated between all the indexers (based on the Replication Factor and Search Factor), and all the indexers participate in searches. You can choose the number of copies of replicated raw data (Replication Factor), which use around 15% of the original data, and the number of searchable copies of the index files (Search Factor), which use around 35% of the original data. For more info, see https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Aboutclusters. Ciao. Giuseppe
Hello Community, we have a challenge with our Sysmon instance. While testing compatibilities, we noticed that after Sysmon gets upgraded it no longer talks to the SIEM, for some weird reason. Has anyone experienced anything like this before? Regards, Dan
Hi @Real_captain, sorry, if you want to exclude results from search 2, you have to use the NOT operator:

`eoc_stp_events_indexes` host=p* OR host=azure_srt_prd_0001 NOT
    [ search (index=events_prod_srt_shareholders_esa OR index=eoc_srt) seev.047 Name="Created Disclosure Response Status Advice Accepted"
      | fields messageBusinessIdentifier ]
| table timestampOfReception, messageOriginIdentifier, messageType, status, messageBusinessIdentifier, originPlatform, direction, sourcePlatform, currentPlatform, targetPlatform, senderIdentifier, receiverIdentifier

Ciao. Giuseppe
OK, in reality this logid is not a numeric field, it's a string, but for some unknown reason Splunk converts it to a number. Maybe this is a bug and you should create a support case. What happens if you try this?

index=<myindex> logid="0000000013" AND logid!="13" | stats count

If this doesn't help, I don't know how to tell Splunk in the search that this field should be kept as a string instead of being converted to a number.
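As a hedged sketch (hypothetical index name): double-quoting the value in a where clause forces a string comparison at eval time, which may keep the comparison from going numeric:

```
index=my_index logid=0000000013
| where logid=="0000000013"
| stats count
```

This is an assumption-level workaround for the conversion behaviour described above, not a confirmed fix.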
Providing the source of your dashboard (in a code block </>) would be useful, as would a sample of your lookup (anonymised appropriately).