All Posts



You can see my earlier comment. Essentially you don't need append/stats for this job: index=dhcp_source_index NOT [search index=sysmon_index | stats values(host) as host]. If you only want to know which DHCP hosts are not in sysmon_index, add stats values(host) or stats count by host after this search.
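Put together, the full search this comment describes might look like the sketch below (index names are taken from the thread; the trailing stats line is the optional host summary mentioned above):

```
index=dhcp_source_index NOT [search index=sysmon_index | stats values(host) as host]
| stats count by host
```

The subsearch returns the set of hosts seen in sysmon_index, and the outer NOT keeps only DHCP events whose host is absent from that set.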
Has anyone tried this add-on to pull TFS commits into Splunk via Azure DevOps (Git Activity) - Technical Add-On? I tried installing this app on one of the heavy forwarders, but the inputs section of this add-on does not work.
I would like to predict the memory, CPU, and storage usage of my Splunk servers (indexers, search heads). My step-wise plan is to first analyze current usage and then predict the next 6 months of usage for my Splunk platform (indexers, search heads, heavy forwarders).
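As a rough sketch of the analysis step, assuming the Monitoring Console's _introspection data is being collected, you could chart daily CPU usage and extrapolate with the predict command. The sourcetype and field names below are the usual Hostwide resource-usage defaults, and the host filter is a placeholder, so verify both in your environment:

```
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart span=1d avg(data.cpu_system_pct) as avg_cpu
| predict avg_cpu future_timespan=180
```

The same pattern applies to memory (e.g. data.mem_used) and, for storage, to the per-index sizes reported by the Monitoring Console.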
Hello, I'm facing an issue with dashboard graphs. When checking the graphs from the metric browser, all the data shows fine. See below. But when we create a dashboard with the same data, we see some gaps. See below. Does anyone have an idea why this is happening?
The "Bad request for url..." verbiage typically points to an invalid webhook address.  Make sure the URL of the webhook is publicly accessible, is addressable with HTTPS, and doesn't contain any private certificates in the chain.  This Lantern article (with a video walkthrough) may be helpful => https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_the_Microsoft_Teams_Add-on_for_Splunk   As an alternative, you can use Azure Functions to get the same call record data.  This way, you don't have to have the webhook on your forwarder.  Instead, all the plumbing happens in Azure and the data is pushed to Splunk via HEC.  Here is a Lantern article on that => https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Teams_call_record_data_and_Azure_Functions
Hello community, how can I make a playbook run every 5 minutes automatically?
I've been searching for a while, but I haven't been able to find how to access an alert's description from within my add-on's alert action Python code. I'm using helper.get_events() to get the alert's triggered events and helper.settings to get the title of the alert. Both are from https://docs.splunk.com/Documentation/AddonBuilder/4.1.4/UserGuide/PythonHelperFunctions. That documentation page doesn't seem to list any way to pull an alert's description, though. Does anyone know where it's stored/how to access it?
I am working on a playbook where there is a need to copy the current event's artifacts into a separate open and existing case. We are looking for a way to automate this through phantom.collect + phantom.add_artifact or other means. We have a way to pass in the existing case id and need a solution to duplicate artifacts from the running event into the case specified by that case id.
Hi Giuseppe, I appreciate the lightning-fast answer and I agree, there is a multitude of logs to choose from. That's kind of the problem. I will most certainly look at the link you supplied, but I was trying to find out which logs other people feel work best for them. In the meantime I will have a look around the content on your link. Ciao Norm
Hi @mmcap , using the Splunk TA for Windows (https://splunkbase.splunk.com/app/742) you can monitor many things on a Windows device (server or client). If you have security requirements, the first data source should be wineventlog:security. But there are many other sources that could be interesting. As I said, open the Add-On and see the possible inputs you have, so you can choose the ones you require. Ciao. Giuseppe
When monitoring Windows systems which logs do you find to give the best information for finding security events and then tracking down the event from start to finish?
I am trying to convert a dashboard from Simple XML to Dashboard Studio. In the original dashboard there is a token, "$click.name2$", that links to the corresponding name of the field in another dashboard. To my understanding, the equivalent of "$click.name2$" in Simple XML should be "$name" in Dashboard Studio; however, when I use "$name" the correct value is not returned. What would be the equivalent of "$click.name2$" in Dashboard Studio? This is for a single value.
Thanks for all your help and advice. I will try the rest command on the MC as you suggest tomorrow; I'm back home now. Normally the MC is well configured. I will update after searching. But I agree with you, I think I will have to open a case with @splunk. Best regards
Hi, thanks for the reply. To simplify it, let us say we have two lists of items of the same type; it could be anything. How can we compare both lists and list only the subset of items not common to both lists? Regards, D
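For example, assuming each list can be searched as events sharing a common field (the field name item and the two index names below are placeholders), the values appearing in only one of the two lists can be isolated like this:

```
(index=list_a) OR (index=list_b)
| stats dc(index) as list_count values(index) as found_in by item
| where list_count=1
```

Each surviving row is an item present in exactly one list, with found_in telling you which one.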
Has anyone tried this against a KV store collection? It seems to break in that case, when you have mv fields.
After some research I could verify that I need to make an indexed lookup, so the fields will be indexed together with the data.
If you don't observe performance degradation, you needn't worry about it.
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf#Timestamp_extraction_configuration You have the TZ parameter which you can use to "bind" a predefined timezone to a particular sourcetype, source or host. But it's always best if the source specifies the timezone within the timestamp itself - it saves you some work and possibly much grief later.
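For example, a props.conf stanza that binds a timezone to a sourcetype might look like the following (the sourcetype name and timezone are placeholders; the same TZ setting also works in host:: and source:: stanzas):

```
[my_sourcetype]
TZ = America/New_York
```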
But what do you mean by asset? What in your data tells you that this is one "asset" and this is another one? Is it the host field or some other field within your data? Or any combination of fields?
Sorry, I should have mentioned what was pretty obvious to me: the rest command should be run on the MC - a properly configured MC should have access to all your components. But you should have issued the rest call from the MC _against_ your CM. Still, if you're stuck in that "no candidates" state, I'd suggest opening a support case.
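For reference, a rest call issued from the MC against the cluster manager could look like the sketch below (the splunk_server value is a placeholder for your CM's server name as registered with the MC):

```
| rest splunk_server=cm-host /services/cluster/master/info
```

Because the MC holds search-peer connections to all components, splunk_server lets you target the CM's REST API without logging into it directly.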