All Posts


I totally agree with what @PickleRick said. This is technically doable, but there is no real point in doing it. I know there are some sites that want internal logs to be visible to e.g. power users as well, but I wouldn't call that a perfect solution either, as it raises other concerns!
Good day, I'm having an issue with an email dashboard I'm attempting to create in Splunk. This dashboard filters on the various email header fields such as sender, recipient, subject, etc. One of these fields is the attachments field. The issue is that there is *always* a sender, recipient, and subject... but not all emails have attachments, nor do I always want to filter by it. In the dashboard, I'm using a text field with a default value of '*'. The problem with this is shown in the extract below.

index=email source=/var/email_0.log attachments=$file$ OR sha256=$hash$

This search will find all emails with attachments, but filter out emails without any. However, what if I want to search for an email just by its subject while ignoring attachments? I'd love to be able to change the dashboard so that filtering by these fields could be turned on and off, but I haven't found a way to do that. I thought I could use isnotnull(attachments) inside a case() or if() function to test if the field exists, but those expressions don't appear to work in the base search. Does anyone have any insight into how I could change the search (or dashboard) so that I'm not always filtering by attachments? Perhaps by changing the default values? Or perhaps the regex command?
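A pattern that often comes up for optional dashboard filters is to fill the missing field with a placeholder before applying the token, so the '*' default also matches events that have no attachments at all. A minimal sketch, assuming the token names from the dashboard above and a made-up placeholder value of "none" (the OR is replaced with separate clauses so each filter is independently optional):

index=email source=/var/email_0.log
| fillnull value="none" attachments sha256
| search attachments=$file$ sha256=$hash$

With both text inputs defaulting to '*', every event passes; typing a specific attachment name or hash narrows the results, and events without attachments are no longer silently dropped.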
While I do understand that compliance people (I suppose that's where the idea ultimately comes from) sometimes have their reasons, sometimes they are a bit overzealous. Remember that _internal is - as the name says - Splunk's internal index. There should only be things relevant to Splunk's inner workings in there. This index is not meant for non-admin access, so there should not be data in it which isn't obtainable by the admins anyway. So while technically you should be able to mask some data out of your events, it might make troubleshooting more difficult (the supportability point raised by @isoutamo is a very good one, too). You must also remember that parsing (and all associated activities like SEDCMD) is done on the first heavy component in the event's path, so you'd need to place the props/transforms on the search head(s) generating those alerts. And that is a very unintuitive place to look for such settings if someone inherits your environment in the future. So while it is technically possible, I'd be hard pressed to call this a good idea.
Can you increment the checkpoint number by one before saving it with the helper functions in Add-on Builder? This should prevent the input from picking up the last event multiple times when there are no new events after the last checkpoint.
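A rough sketch of that idea in an Add-on Builder modular input, using the helper checkpoint functions (helper.get_check_point / helper.save_check_point); the checkpoint key, the "id" field, and fetch_events_since are illustrative names, not anything from the original add-on:

import json

def collect_events(helper, ew):
    # Read the previously saved checkpoint; fall back to 0 on the first run
    last_id = helper.get_check_point("last_event_id") or 0
    # fetch_events_since is a hypothetical call to the remote API,
    # expected to return only events with id >= last_id
    for event in fetch_events_since(last_id):
        ew.write_event(helper.new_event(data=json.dumps(event)))
        last_id = max(last_id, event["id"])
    # Save the last seen id plus one so the next poll starts strictly after it
    helper.save_check_point("last_event_id", last_id + 1)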
I'm just telling you that it helps. I admit I got through the exam back when it was in its beta phase without much practical experience using ES (with only some at administering it) and before I did my ES trainings. But I have loads of experience with different security-related solutions and loads of experience with core Splunk, so I had a serious advantage here. You can of course try and see. After all, an exam, should you attempt it without taking the course, is cheaper than the course, isn't it? It's simply impossible to: 1) Assess your proficiency without knowing you and without having worked with you. 2) Give you more precise pointers without violating the NDA. The certification flowchart lists the recommended trainings. You can check their outlines and decide for yourself whether you feel knowledgeable enough to attempt the exam without taking the training. The trainings are usually useful because typically, when you're working with something on your own, you might simply not be using some parts of the functionality, or not even be aware of them, because you've never needed them. So this certification, as well as some certs in the Splunk Core track (up to and including Admin and (Advanced?) Power User), can be attempted at your discretion - without any required trainings. I don't think anyone can give you more precise info since - as I said - the exam contents are covered by an NDA. After all, you're supposed to have the knowledge to answer the questions, not learn the answers mindlessly.
I have never tried this, but basically it should work for all internal logs except _audit. Try using [splunkd] as a sourcetype stanza or [source::…/var/log/splunk/splunkd*], depending on which events you are trying to mask. You should remember that source definitions override sourcetype definitions. BUT if you do this and you have any issues with Splunk, it probably gives Splunk a reason to deny you full support until you remove that configuration.
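For reference, a sketch of what that could look like, reusing the sed expression from the question below; this would live in props.conf on the instance that actually writes the splunkd log for the webhook alert action (the search head, per @PickleRick's note above), not on a downstream forwarder:

[splunkd]
SEDCMD-mask_webhook_url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g

or, as a source-based stanza that takes precedence over the sourcetype one:

[source::.../var/log/splunk/splunkd.log]
SEDCMD-mask_webhook_url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g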
I'm wondering, is it possible to mask / anonymize data at index time for the _internal index? I have an Alert Action configured with a webhook, and I'm looking to mask the URI of the request in the internal logs. I'm able to mask the value at search time with this SPL:

index=_internal action=webhook | rex field=url mode=sed "s/https?:\/\/www.domin.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g" | table url

I tried to port this configuration to /opt/splunk/etc/system/local/ by creating a props.conf with the following:

[sourcetype::_internal]
SEDCMD-url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g

AND

[splunkd]
SEDCMD-url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g

Neither works. This is a standalone instance of Splunk running on an EC2 instance. So my question is: is it even possible to filter Splunk-generated logs? Should I funnel these to transforms.conf and do it there? Is that possible? Any help or insight would be greatly appreciated.
Chicken / egg thing, no? How do you get real hands-on experience as an analyst without doing the training? I have the demo installed / free version and did some of the tutorial ride-along activities, but not real-world stuff; that seems a bit unrealistic. Are you suggesting holding off on the exam until getting to that level of hands-on work?
Well... Let's put it this way - as far as I remember, none of the trainings is formally required. It helps to have either this training or hands-on experience working with ES (and I mean experience, not just spinning up a lab environment if you happen to have access to an NFR license as a partner or something like that - real work experience).
Thank you, I did all the trainings in the curriculum path for this test with the exception of Using Splunk Enterprise Security, as that one has a 1500 fee while all the others are free! Is that course absolutely required to be proficient enough to pass this exam?
I'm having the same issues, any resolution?
Hey guys, I'm trying to ingest HAProxy logs in Splunk UBA. My issue is that I'm getting eventHasNoEntities for all events, even though they are parsed. What does this error mean exactly? Does it mean the event has no device or user associated with it, or that it's missing some fields? My main event key includes the whole HAProxy log.
Hi @Sankar, if you don't have Asset and Identity management configured, you cannot define priority, only urgency: use that in your reports. Ciao. Giuseppe
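For example, a quick report over urgency could look something like the sketch below; index=notable is the ES default notable index, and rule_name is just one of the fields typically present on notable events:

index=notable
| stats count by urgency
| sort - count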
Hi @rahulkumar, as I said, you have to use the INGEST_EVAL property in transforms.conf and the json_extract function to extract the metadata from your JSON, and finally keep only the message as _raw. Please share your full expanded JSON logs using the "Insert/Edit Code Sample" button so we can help you. Ciao. Giuseppe
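A bare-bones sketch of that approach; the sourcetype name and the "level" metadata key are made up, and only "message" comes from the description above:

props.conf:
[my:json:sourcetype]
TRANSFORMS-keep_message = keep_message_as_raw

transforms.conf:
[keep_message_as_raw]
INGEST_EVAL = level=json_extract(_raw, "level"), _raw=json_extract(_raw, "message")

The metadata is pulled out into an indexed field first, and _raw is overwritten last so the earlier expression can still read the original JSON.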
Hi, when you have a distributed environment it's mandatory that all servers send their internal logs to the indexer cluster, as @VatsalJagani already pointed out! Without that it's really hard to debug and monitor the environment. You should also have the MC and LM set up on some node. It's best to have a dedicated instance for those, or use e.g. the SH deployer for them. There is a list of combinations which are allowed and supported by Splunk at docs.splunk.com. r. Ismo
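For reference, the usual shape of that forwarding config on the non-indexer nodes (search heads, MC, deployer, etc.) is an outputs.conf along these lines; the group name and indexer addresses are placeholders:

[indexAndForward]
index = false

[tcpout]
defaultGroup = my_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

forwardedindex.filter.disable = true makes sure _internal and the other internal indexes are forwarded too, and index = false keeps the node from also indexing a local copy.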
Hi, it's just as the others have already told you. Maybe there is another way to fulfil your need, but that requires that we understand your real needs. If it can't be done otherwise, you could present your needs at ideas.splunk.com and hope that others give it enough points. Then Splunk could consider creating and deploying it in some future version. I have heard that currently 70-80% of new features come that way. r. Ismo
Hi, it's like @VatsalJagani said: when you don't set an exact end time for your search but you have an earliest value, Splunk puts latest=now. You can look up the earliest and latest values your alerts actually used from the _audit index. But I suppose that even then you will not always get exactly the same results! Why does this happen? When you are ingesting data there are always some delays; it could be less than a second, several minutes, or even longer, depending on your environment, your log sources and how they are integrated into Splunk. For that reason you should always use suitable earliest and latest values with suitable buffers in every alert! And if there are some inputs where the latency can vary too much, then you probably need to create two series of alerts for them: one which tries to catch events as close to real time as possible, and a second one which takes care of the later-arriving events that the near-real-time alert missed. r. Ismo
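To see which time boundaries a particular alert actually ran with, something like the sketch below can help; the saved search name is a placeholder, and search_et / search_lt are the effective earliest/latest epoch times recorded in the audit events:

index=_audit action=search info=completed savedsearch_name="My alert"
| table _time user savedsearch_name search_et search_lt total_run_time event_count

A buffered hourly alert would then, for example, run with earliest=-65m@m and latest=-5m@m, so events that arrive a few minutes late are still covered.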
Hi, as @VatsalJagani already said, that error message doesn't relate to your login issue. It just tells you that the DB connection didn't work because the KV store is somehow broken/stopped. splunkd.log should contain some lines which could help us see what the real issue was. But let's start with the migration part, as it's quite obvious that it has something to do with this issue! From where did you migrate it, and what is the target environment? How did you do the migration? Were there any issues before the migration? Anything else we should know? r. Ismo
@Sathish28
1. Check the status of the KV store.
2. Verify the status of the KV Store service:
./splunk show kvstore-status
3. Check mongod.log:
less /opt/splunk/var/log/splunk/mongod.log
4. Verify that the permissions for the KV Store directories and files are set correctly. Incorrect permissions can prevent the KV Store from initializing. Set splunk.key to the default file permission:
chmod 600 $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key
Restart Splunk.
@Sathish28 - A few things I want to bring to your attention:
The error you are seeing is not related to the login issue you are having at all.

For the login issue:
Are you trying an LDAP credential? Log in first with a native Splunk admin account, then fix the LDAP-related issue. Check the Splunk internal logs and the LDAP configuration page.
Is it Splunk native authentication? Then you might need to reset the credentials.

For the mongod-related errors you are seeing in the logs:
As suggested by @splunkreal, please check Splunk's internal logs to find the details on why the mongod service is unable to start.

I hope this helps!!! Kindly upvote if it does!!!
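If it does turn out to be native authentication, the generally documented way to reset a lost admin password on Splunk 7.1+ is to seed a new one; a sketch, with the new password as a placeholder:

$SPLUNK_HOME/bin/splunk stop
# keep the old user list around; user-seed.conf only applies to users not already present in etc/passwd
mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak
cat > $SPLUNK_HOME/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = <new-password>
EOF
$SPLUNK_HOME/bin/splunk start

Other native users from the backed-up passwd file would need to be merged back afterwards.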