All Posts

Can you increment the checkpoint number by one before saving it, using the helper functions in the Add-on Builder? This should prevent it from fetching the last event multiple times when there are no new events after the last checkpoint.
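The usual pattern is to checkpoint one past the last event you ingested, so the next poll asks only for strictly newer events. Here is a minimal sketch of that idea, assuming Add-on Builder-style get_check_point()/save_check_point() helpers; the InMemoryHelper and fetch_events below are hypothetical stand-ins for testing, not Add-on Builder APIs (in a real modular input you would use the helper object the Add-on Builder passes to your collect_events function):

```python
class InMemoryHelper:
    """Hypothetical stand-in for the Add-on Builder helper object."""
    def __init__(self):
        self._store = {}

    def get_check_point(self, key):
        return self._store.get(key)

    def save_check_point(self, key, value):
        self._store[key] = value


def collect(helper, fetch_events, checkpoint_key="last_id"):
    """Fetch events at or after the checkpoint, then save newest_id + 1
    so the boundary event is not re-ingested on the next run."""
    last = helper.get_check_point(checkpoint_key) or 0
    events = fetch_events(since=last)  # events with id >= last
    if events:
        newest = max(e["id"] for e in events)
        # Increment by one so the next run asks only for strictly newer events.
        helper.save_check_point(checkpoint_key, newest + 1)
    return events
```

With this, a second run that finds no new events returns nothing instead of re-emitting the last event.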
I'm just telling you that it helps. I admit I got through the exam back when it was in its beta phase, without much practical experience using ES (with only some at administering it) and before I did my ES trainings. But I have loads of experience with different security-related solutions and loads of experience with core Splunk, so I had a serious advantage here. You can of course try and see. After all, an exam, should you attempt it without taking the course, is cheaper than the course, isn't it?

It's simply impossible to: 1) Assess your proficiency without knowing you and without having worked with you 2) Give you more precise pointers without violating the NDA. The certification flowchart lists the recommended trainings. You can check their outlines and decide for yourself whether you feel knowledgeable enough to attempt the exam without taking the training. The trainings are usually useful because, when you're working with something on your own, you might simply not be using some parts of the functionality, or not even be aware of them, because you've never needed them.

So this certification, as well as some certs in the Splunk Core track (up to and including Admin and (Advanced?) Power User), can be attempted at your discretion, without any required trainings. I don't think anyone can give you more precise info since, as I said, exam contents are covered by an NDA. After all, you're supposed to have the knowledge to answer the questions, not learn the answers mindlessly.
I have never tried this, but basically it should work for all internal logs except _audit. Try to use [splunkd] as a sourcetype stanza or [source::…/var/log/splunk/splunkd*], depending on which events you are trying to mask. You should remember that source definitions override sourcetype definitions. BUT if you do this and you later have any issues with Splunk, it probably gives Splunk a reason to deny full support until you remove that configuration.
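To make that concrete, a hedged props.conf sketch (the SEDCMD name, the regex, and the masked URL are illustrative, and `...` is the props.conf path wildcard, not an elision):

```
# props.conf in $SPLUNK_HOME/etc/system/local/ on the standalone instance.
# Pick ONE of the stanzas below; source:: stanzas override sourcetype stanzas.

[splunkd]
SEDCMD-mask_url = s/https?:\/\/www\.domain\.com\/[^" ]+/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g

# alternatively, keyed on the source:
[source::.../var/log/splunk/splunkd.log]
SEDCMD-mask_url = s/https?:\/\/www\.domain\.com\/[^" ]+/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g
```

Note that SEDCMD applies at index time only, so already-indexed events are unaffected, and a restart is needed for the change to take effect.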
I'm wondering, is it possible to mask / anonymize data at index time for the _internal index? I have an Alert Action configured with a webhook, and I'm looking to mask the URI of the request in the internal logs. I'm able to mask the value at search time with this SPL:

index=_internal action=webhook | rex field=url mode=sed "s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g" | table url

I tried to port this configuration to /opt/splunk/etc/system/local/ by creating a props.conf with the following:

[sourcetype::_internal]
SEDCMD-url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g

and:

[splunkd]
SEDCMD-url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g

Neither works. This is a standalone instance of Splunk running on an EC2 instance. So my question is: is it even possible to filter Splunk-generated logs? Should I funnel these through transforms.conf and do it there? Is that possible? Any help or insight would be greatly appreciated.
Chicken / egg thing, no? How do you get real hands-on experience as an analyst without doing the training? I have the demo / free version installed and did some of the tutorial ride-along activities, but not real-world stuff; expecting that seems a bit unrealistic. Are you suggesting to hold off on the exam until getting to that level of hands-on work?
Well... Let's put it this way: as far as I remember, none of the trainings are formally required. It helps, though - either this training or hands-on experience working with ES (and I mean experience; not just spinning up a lab environment if you by any chance have access to an NFR license as a partner or something like that - real work experience).
Thank you, I did all the trainings in the curriculum path for this test with the exception of Using Splunk Enterprise Security, as that one has a 1500 fee while all the others are free! Is that course absolutely required to be proficient enough to pass this exam?
I'm having the same issues, any resolution?
Hey guys, I'm trying to ingest HAProxy logs into Splunk UBA. My issue is that I'm getting eventHasNoEntities for all events, even though they are parsed. What does this error mean exactly? Does it mean the event has no device or user associated with it, or that it's missing some fields? My main event key includes the whole HAProxy log.
Hi @Sankar , if you haven't configured Asset and Identity management you cannot define priority, only urgency: use that in your reports. Ciao. Giuseppe
Hi @rahulkumar , as I said, you have to use the INGEST_EVAL property in transforms.conf and the json_extract function to extract the metadata from your json, keeping only the message as _raw. Please share your full expanded json logs using the "Insert/Edit Code Sample" button so we can help you. Ciao. Giuseppe
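As a hedged sketch of that approach (the sourcetype name, transform name, and "message" key are assumptions about your data, not taken from your logs):

```
# props.conf -- hypothetical sourcetype name
[my_json:events]
TRANSFORMS-set_raw = set_raw_to_message

# transforms.conf -- assumes the incoming _raw is a JSON object
# with a "message" key; keeps only that value as _raw at ingest time
[set_raw_to_message]
INGEST_EVAL = _raw := json_extract(_raw, "message")
```

Any metadata fields you still need can be extracted with additional json_extract calls in the same INGEST_EVAL before _raw is overwritten.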
Hi, when you have a distributed environment it's mandatory that all servers send their internal logs to the indexer cluster, as @VatsalJagani already pointed out! Without that it's really hard to debug and monitor the environment. You should also have the MC and LM set up on some node. It's best to have a dedicated instance for those, or use e.g. the SH deployer for them. The combinations which are allowed and supported by Splunk are listed in docs.splunk.com. r. Ismo
Hi, it's just like the others already told you. Maybe there is another way to fulfill your need, but for that we need to understand your real needs. If it cannot be done otherwise, then you could present your needs on ideas.splunk.com and hope that others give the idea enough points. Then Splunk could consider creating and deploying it in some future version. I have heard that currently 70-80% of new features come in that way. r. Ismo
Hi, it's like @VatsalJagani said: when you don't set an exact end time for your search but you do have an earliest time, Splunk puts latest=now. You can look up the earliest and latest values your alerts actually used in the _audit index. But I suppose that even then you will not always get exactly the same results!

Why does this happen? When you are ingesting data there are always some delays; it could be less than a second, several minutes, or even longer, depending on your environment, your log sources, and how they are integrated into Splunk. For that reason you should always use suitable earliest and latest values, with suitable buffers, on every alert! And if there are some inputs where the latency varies too much, then you probably need to create two series of alerts for them: one which tries to catch events as close to real time as possible, and a second one which takes care of the later-arriving events that the near-real-time alert missed. r. Ismo
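For example, a scheduled alert could search a window shifted back by a fixed buffer so that late-arriving events have time to be indexed before the window closes (the index, sourcetype, and the 10-minute buffer here are purely illustrative):

```
index=web sourcetype=access_combined earliest=-70m@m latest=-10m@m
| stats count by status
```

Run hourly, this always covers a complete, snapped-to-minute 60-minute window that ended 10 minutes ago, so repeated runs over the same window return the same count as long as ingestion latency stays under the buffer.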
Hi, as @VatsalJagani already said, that error message is not related to your login issue. It just tells you that the DB connection didn't work because the KV store is somehow broken/stopped. In splunkd.log there should be some lines which could help us see what the real issue is. But let's start with the migration part, as it's quite obvious that it has something to do with this issue! From where did you migrate, and what is the target environment? How did you do the migration? Were there any issues before the migration? Anything else we should know? r. Ismo
@Sathish28 
1. Verify the status of the KV Store service:
./splunk show kvstore-status
2. Check mongod.log:
less /opt/splunk/var/log/splunk/mongod.log
3. Verify that the permissions for the KV Store directories and files are set correctly. Incorrect permissions can prevent the KV Store from initializing. Set splunk.key to the default file permission:
chmod 600 $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key
4. Restart Splunk.
@Sathish28 - A few things I want to bring to your attention:

The error you are seeing is not related to the login issue you are having at all.

For the login issue:
* Are you trying an LDAP credential? Log in first with a native Splunk admin account, then fix the LDAP-related issue. Check Splunk's internal logs and the LDAP configuration page.
* Is it Splunk native authentication? Then you might need to reset the creds.

For the mongod-related errors you are seeing in the logs: as suggested by @splunkreal , please check Splunk's internal logs to find details on why the mongodb service is unable to start.

I hope this helps!!! Kindly upvote if it does!!!
@AShwin1119 - I think it's the same question I have answered here: https://community.splunk.com/t5/Monitoring-Splunk/indexer-cluster-to-SH-cluster-replication-issue/m-p/709746/highlight/true#M10687

I think you are not forwarding the SH data to the indexers, which is compulsory when you are using a SHC, and a best practice on all SHs anyway. https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/Forwardsearchheaddata

I hope this helps!!! Kindly upvote if it does!!!!
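The linked docs page describes the standard outputs.conf for this; as a sketch, on each search head it looks roughly like the following (the group name, hostnames, and port are examples for your environment):

```
# outputs.conf on each search head -- forward all data, index nothing locally
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

With this in place, the search heads' _internal and _audit data (and any summary data) land on the indexers, where the SHC members can search them.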
@CrossWordKnower - When you say earliest=-8h@h, latest becomes now because you are not providing it. So the number of results differs even when you run the search again manually, because your search picks up the new events coming in every time.

Try using static values of earliest & latest, for example: earliest=01/22/2025:00:00:00 latest=01/23/2025:00:00:00
In this scenario it should give exactly the same count, regardless of whether it's a manual search or an alert, and whenever you run it.

I hope this is understandable. Kindly upvote if it helps!!!
@welcomerrr - As described by @PickleRick , you cannot create a custom function for the stats command, but you can create a whole new custom command which implements the functionality in Python.

But most of the requirements you might have should be fulfillable with the existing stats command functions. Please describe your exact use-case and the community should be able to help you write the query without a custom command or function.