All Posts

Yes, I understand. They are not "cloned", they are redirected. The events are sent to _all_ output groups specified in outputs.conf (or to the specified output group(s), if you manipulated _TCP_ROUTING manually). Within each applicable group, the event is sent to just one of the servers configured in that group. So you must make sure that the events you want have both output groups specified in _TCP_ROUTING.
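As a minimal sketch of what that can look like (the group, stanza, and sourcetype names here are just placeholders, not your actual config):

# transforms.conf
[route_to_both_groups]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = groupA,groupB

# props.conf
[my_sourcetype]
TRANSFORMS-routing = route_to_both_groups

# outputs.conf
[tcpout:groupA]
server = indexer1:9997

[tcpout:groupB]
server = heavyforwarder1:9997

With FORMAT listing both group names, each matching event goes to one server in groupA and one server in groupB.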
Hi all, Since the redesign of the new Incident Review page, we appear to have lost the ability to search for Notables using a ShortID. With the old dashboard this was achieved by selecting Associations from the filters and entering the ShortID you were looking for, but the new Incident Review dashboard appears to have taken this functionality away. Is there any way to achieve this?
Hi PickleRick, thanks for your response and time. The cloned logs are routed only to one instance, specified in the outputs.conf. The "original" logs, not the cloned ones, are directed to my local indexers; just the cloned sourcetype is directed to another heavy forwarder, specified in the outputs.conf placed in the same app as the props and transforms. Not sure if I'm clear.
The "convert mktime()" could also be the way to go but you need to specify the time format with... the "timeformat=" option. Otherwise Splunk has to guess and usually guesses wrong.
Please, don't dig out old threads. Let them rest in peace. But seriously, to gain more visibility, you should just make a new thread, possibly linking to any information you already found for reference. But to the point - if all else fails, you can always create your own script using Selenium and emulate a user clicking through your SharePoint share and downloading the files, but it's a very, very ugly idea.
You already had some suggestions which are OK, but the question is what your limitations on this search are. How many events do you expect from each of those data sets, and how long is the search supposed to take - these can warrant a different approach to the problem. For example, since you're dealing with email data, it's a fairly valid question why you aren't using the CIM datamodel (and having it accelerated).
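As a rough sketch of the accelerated-datamodel route (assuming the CIM Email datamodel is populated and accelerated; adjust the field names to your environment):

| tstats summariesonly=true count from datamodel=Email by All_Email.src_user All_Email.recipient

Searching the accelerated summaries this way is usually far cheaper than scanning the raw email indexes.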
10k results, not 50k. The 50k results limit is for the join command. A "normal" subsearch has a default 10k results limit. (Yes, all those limits can be confusing and are easy to mistake for one another.)
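If it helps to see where those defaults live, they are set in limits.conf, roughly along these lines (a sketch - check the limits.conf spec for your version before relying on it):

[subsearch]
maxout = 10000

[join]
subsearch_maxout = 50000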
Could you please try setting the parameter resource_to_telemetry_conversion to true?

exporters:
  prometheus:
    endpoint: "1.2.3.4:1234"
    [..]
    resource_to_telemetry_conversion:
      enabled: true

See opentelemetry-collector-contrib/exporter/prometheusexporter at main · open-telemetry/opentelemetry-collector-contrib · GitHub
Hi, thanks for the responses. Much appreciated. We have done the blacklists as it's easier to do under our Change Board (we have pre-auths), and we need a longer period of time to do major changes like the Sysmon one. So, in the short term, we went with the blacklist. I found that I had to alter the regex slightly to get it working, then I waited around a week for all devices to check in with the DS and get the new config. Strange though: even with the new inputs.conf, the devices still pushed out logs for a few hours, then nothing. I actually expected a full-blown STOP. But hey ho.
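For reference, this kind of blacklist typically looks something like the following in inputs.conf (the stanza and EventCode values are placeholders for illustration, not the actual config from this thread):

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
blacklist1 = EventCode="(3|22)"

The value is a regex, which is usually where the small adjustments mentioned above come in.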
Hi @BigJohnQ, your first solution or the one from @ITWhisperer are the most efficient if you have fewer than 50,000 results in the subsearch. If instead you could have more than 50,000 results in the subsearch, you should try another solution:

index IN (email1,email2) sourcetype=my_sourcetype source_user=*
| stats dc(index) AS index_count values(*) AS * BY source_user
| where index_count>1

You can replace values(*) AS * with the list of all the fields you need to have in the results. Avoid your second solution because it's very slow! Ciao. Giuseppe
Hi All, One of our teams has implemented an incoming webhook from Splunk into MS Teams to post a message when an alert is triggered. We encountered what seems to be a bug where one specific message could not be replied to or reacted to. Strangely enough, viewing the message on a mobile would allow you to reply and react to it. Every other alert message, before and after, we have been able to reply to.
Hi, do you have any solutions? I'm trying to upload files from sharepoint to splunk enterprise as well.
Thank you
Thank you! It did help 
If you add the following after your timechart command, it will change the values from numbers to percentages:

| addtotals fieldname=_Total
| foreach * [ eval <<FIELD>>=round(('<<FIELD>>'/_Total*100),2) ]

Note that the _ in front of the total field name prevents it from being displayed; the foreach command then just calculates the percentages.
What you suggest is not possible in a single search. Assuming the cardinality does not change much over the 24h period, I don't suppose there is a benefit in running the search hourly - it would produce more metrics and would need to be aggregated on consumption. However, you could create N searches where the body of each search is a single macro: the macro runs your base SPL, and you call it with the device id prefixes you want to search for (see the sketch below). Not an elegant solution, but functional. I don't understand the message you say you are getting, though - I am not familiar with that. Secondly, what is the impact of that message occurring - does it break the collected data in some way, and does it stop other searches from working?
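A sketch of the macro idea (the macro name, argument, and base SPL are purely illustrative):

# macros.conf
[device_stats(1)]
args = prefix
definition = index=my_metrics device_id="$prefix$*" | stats count BY device_id

Each of the N scheduled searches would then just be a one-line call such as:

`device_stats("ABC")`

so the base SPL lives in one place and only the prefix differs between searches.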
Thanks ITWhisperer! I did try the string conversion, but it did not work. This looks like it did the trick!
Try something like this

index=email2 sourcetype=my_sourcetype source_user=*
    [ search index=email1 sourcetype=my_sourcetype source_user=*
      | eval recipient = source_user
      | fields recipient
      | dedup recipient
      | format ]
| eval IN = strptime(in, "%Y%m%d%H%M%S")
| eval OUT = strptime(out, "%Y%m%d%H%M%S")
| eval Duration = tostring(OUT - IN, "duration")