You already had some suggestions which are OK, but the question is what the limitations on this search are. How many events do you expect from each of those datasets, and how long is the search supposed to take? These can warrant a different approach to the problem. For example, since you're dealing with email data, it's a fair question why you aren't using the CIM datamodel (and having it accelerated).
10k results, not 50k. The 50k limit applies to the join command; a "normal" subsearch has a default limit of 10k results. (Yes, all those limits can be confusing and are easy to mix up with one another.)
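For reference, a minimal sketch of where those defaults live in limits.conf (the values shown are the usual defaults; confirm against the limits.conf spec for your Splunk version before changing anything):
# limits.conf (illustrative defaults)
[subsearch]
# maximum number of results a plain subsearch returns
maxout = 10000
# maximum runtime of a subsearch, in seconds
maxtime = 60

[join]
# maximum number of results the join command's subsearch returns
subsearch_maxout = 50000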
Could you please try setting the parameter resource_to_telemetry_conversion to true?
exporters:
  prometheus:
    endpoint: "1.2.3.4:1234"
    [..]
    resource_to_telemetry_conversion:
      enabled: true
See: opentelemetry-collector-contrib/exporter/prometheusexporter at main · open-telemetry/opentelemetry-collector-contrib · GitHub
Hi, thanks for the responses. Much appreciated. We have gone with the blacklists as it's easier to do under our Change Board (we have pre-auths), and we need a longer period of time to do major changes like the Sysmon one. So, in the short term, we went with the blacklist. I found that I had to alter the regex slightly to get it working, then I waited around a week for all devices to check in with the DS and get the new config. Strangely though, even with the new inputs.conf the devices still push out logs for a few hours and then nothing. I actually expected a full-blown STOP. But hey ho.
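In case it helps anyone else following this thread, here is a minimal sketch of the Windows event log blacklist syntax in inputs.conf; the channel, event code, and regex are purely illustrative, not what this poster actually deployed:
# inputs.conf on the forwarder (illustrative values only)
[WinEventLog://Security]
disabled = 0
# Drop noisy 4662 events unless they refer to a groupPolicyContainer object
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"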
Hi @BigJohnQ, your first solution or the one from @ITWhisperer is the most efficient if the subsearch returns fewer than 50,000 results. If instead the subsearch could return more than 50,000 results, you should try another solution:
index IN (email1,email2) sourcetype=my_sourcetype source_user=*
| stats dc(index) AS index_count values(*) AS * BY source_user
| where index_count>1
You can replace the values(*) AS * with the list of all the fields you need to have in the results. Avoid your second solution because it's very slow! Ciao. Giuseppe
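As an illustration of that last point, a hedged sketch of the same search with explicit fields instead of values(*) AS * (the field names recipient and subject are placeholders for whatever your events actually contain):
index IN (email1,email2) sourcetype=my_sourcetype source_user=*
| stats dc(index) AS index_count values(recipient) AS recipient values(subject) AS subject BY source_user
| where index_count>1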
Hi all, one of our teams has implemented an incoming webhook from Splunk into MS Teams to post a message when an alert is triggered. We encountered what seems to be a bug where one specific message could not be replied to or reacted to. Strangely enough, viewing the message on a mobile device would allow you to reply and react to it. Every other alert message before and after it we have been able to reply to.
If you add the following after your timechart command it will change the values from numbers to percentages:
| addtotals fieldname=_Total
| foreach * [ eval <<FIELD>>=round(('<<FIELD>>'/_Total*100),2) ]
Note that the _ in front of the total field name prevents it from being displayed, then the foreach command just calculates the percentages.
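For context, a minimal sketch of how this might sit in a full search; the index, sourcetype, and split-by field are purely illustrative:
index=web sourcetype=access_combined
| timechart count BY status
| addtotals fieldname=_Total
| foreach * [ eval <<FIELD>>=round(('<<FIELD>>'/_Total*100),2) ]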
What you suggest is not possible in a single search. Assuming the cardinality does not change much over the 24h period, I don't suppose there is any benefit in running the search hourly, which would produce more metrics and would need to be aggregated on consumption. However, you could create N searches where the body of each search is a single macro that runs your base SPL, and you call the macro with the device id prefixes you want to search for (see the sketch below). Not an elegant solution, but functional. I don't understand the message you say you are getting, though; I am not familiar with that. Secondly, what is the impact of that message occurring? Does it break the collected data in some way, and does it stop other searches from working?
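A hedged sketch of what that could look like; the macro name, index, and field names are assumptions, not anything from your environment:
# macros.conf in your app
[device_cardinality(1)]
args = prefix
definition = index=my_metrics device_id="$prefix$*" | stats dc(device_id) AS device_count

# Each of the N scheduled searches then just calls the macro with its prefix:
`device_cardinality("AB")`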
I am trying to find the duration for a time span. The "in" and "out" numbers are included in the data as type: number, e.g. in = 20240401183030 and out = 20240401193030. I attempted:
| convert mktime(in) AS IN
| convert mktime(out) AS OUT
| eval Duration = OUT - IN
But this does not perform the correct time math. I have not been able to find a function that would directly convert a number to a time, or some other way to get the right duration between the two.
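One possibility, sketched here on the assumption that the values really are in %Y%m%d%H%M%S form, is to parse them with eval strptime instead of convert (field names kept from the question):
| eval IN=strptime(tostring(in),"%Y%m%d%H%M%S"), OUT=strptime(tostring(out),"%Y%m%d%H%M%S")
| eval Duration=tostring(OUT-IN,"duration")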
Hi all, thanks in advance for your time! I have a problem writing a properly working query for this case study: I need to take data from index=email1 and find matching data in index=email2. I tried to do it this way: from index=email1 I take the fields src_user and recipient and use the appropriate search to look for them in the email2 index. Query examples that I used:
index=email1 sourcetype=my_sourcetype source_user=*
[ search index=email2 sourcetype=my_sourcetype source_user=* | fields source_user ]
OR
index=email1 sourcetype=my_sourcetype
| join src_user, recipient [search index=emai2 *filters*]
Everything looked OK in the control sample (I found events in a 10-minute window, e.g. 06:00-06:10) which at first glance matched, but when I extended the search time, e.g. to 24h, it did not show me any events, even those that matched in the short time window (even though they were within those 24 hours).
Thank you for any ideas or solutions for this case.
The Splunk OVA for VMware Metrics documentation at https://docs.splunk.com/Documentation/OVAVMWmetrics/4.3.0/Config/About describes its operating system and update policy:
OS: Red Hat Enterprise Linux release 9.2 (Plow)
OS Update Policy: "You're responsible for the patches introduced in the operating system installed on the OVA. Make sure to regularly update the operating system to avoid vulnerabilities. There is no backward compatibility for the OVA."
Splunk version: 9.1.0.2
Unfortunately the docs don't describe a process for updating the Splunk version. It is likely that you technically could update the Splunk installation on it, but it is not officially supported.
It could be the first; we do have other defined EXTRACTs and other modifications to data pushed to the indexers, and they work properly. But for some reason this portion of the IIS logs just doesn't work properly. I would have to look into the higher priority; however, other IIS sourcetype logs aren't turning out this way. I do know that the props.conf is in the correct spot. When we stood up Splunk initially there were custom-written apps rather than the Splunk-supported TA for IIS. I may go that route if I can't get this resolved via our custom app.
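One way to settle the precedence question, sketched here with an assumed sourcetype name (substitute whatever your custom app actually uses), is to ask btool which app's props.conf stanza wins on the instance doing the extraction:
# Show the merged props.conf settings for the sourcetype and which app each one comes from
$SPLUNK_HOME/bin/splunk btool props list your_iis_sourcetype --debug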
PaulPanther's link https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues is where you want to go. Under the "Keep specific events and discard the rest" section, you can find stanzas for props.conf and transforms.conf files that you can place in an app on your indexing machines. Setting the regex of the setparsing stanza to "some message" would give you only the events containing that "some message" and discard the rest.
# In props.conf
[source::/your/log/file/path]
TRANSFORMS-set= setnull,setparsing
# In transforms.conf:
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[setparsing]
REGEX = some message
DEST_KEY = queue
FORMAT = indexQueue
(It is assumed that you already have a working inputs.conf file to get the logs onto your indexing machines. You can also set the stanza name in the props.conf file to your log's sourcetype instead of the source path; see the sketch below.)
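For illustration, a minimal sketch of that sourcetype-keyed variant (the sourcetype name is just a placeholder for your own):
# In props.conf, keyed on the sourcetype instead of the source path
[your_log_sourcetype]
TRANSFORMS-set = setnull,setparsing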