All Posts

If I understand your question here, I believe adding something like this to your stats aggregation can give you additional fields you can use to filter on, so you only include the Ids that have events from each of the 3 scenarios you have separated with ORs in the original search.

index="B" AND (logType="REQUEST" OR (logType="TRACES" AND message IN ("searchString1*", "searchString2*")))
``` In the stats aggregation below, the max(eval(if())) functions check whether an event matches the condition inside the if statement. If at least one event matches the criteria for a specific 'Id', the value will be 1; if the condition is never met for an 'Id', it will be 0. ```
| stats max(eval(if('logType'=="REQUEST", 1, 0))) as has_request_log,
    max(eval(if('logType'=="TRACES" AND like(message, "searchString1%"), 1, 0))) as has_trace_type_1,
    max(eval(if('logType'=="TRACES" AND like(message, "searchString2%"), 1, 0))) as has_trace_type_2,
    values(message) as messages,
    latest(*) as * by Id
``` Only include the Ids that had events from all 3 of these search criteria ```
| where 'has_request_log'==1 AND 'has_trace_type_1'==1 AND 'has_trace_type_2'==1

Alternatively, you can classify the log types before the stats aggregation and do your filtering based on that field.

index="B" AND (logType="REQUEST" OR (logType="TRACES" AND message IN ("searchString1*", "searchString2*")))
``` Eval to classify the logs returned by your search into a field named 'event_category' ```
| eval event_category=case(
    'logType'=="REQUEST", "Request",
    'logType'=="TRACES" AND like(message, "searchString1%"), "Traces_1",
    'logType'=="TRACES" AND like(message, "searchString2%"), "Traces_2")
``` Group all unique values of 'event_category' seen for each Id ```
| stats values(event_category) as event_category, values(message) as messages, latest(*) as * by Id
``` Only include the Ids that had events from all 3 of these search criteria. The mvcount() function checks how many values the field contains; since we used values(event_category) as event_category, we only want the Ids that have all 3 unique classifications ```
| where mvcount(event_category)>=3
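If you want to sanity-check the classification logic before pointing it at index B, a quick sketch with makeresults and made-up sample events (the Id, logType, and message values below are hypothetical) would be:

| makeresults format=csv data="Id,logType,message
1,REQUEST,request received
1,TRACES,searchString1 details
1,TRACES,searchString2 details
2,REQUEST,request received
2,TRACES,searchString1 details"
| eval event_category=case(
    'logType'=="REQUEST", "Request",
    'logType'=="TRACES" AND like(message, "searchString1%"), "Traces_1",
    'logType'=="TRACES" AND like(message, "searchString2%"), "Traces_2")
| stats values(event_category) as event_category, values(message) as messages by Id
| where mvcount(event_category)>=3

Only Id 1 should survive the final where clause, since it is the only Id that matched all 3 categories.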
Thanks @richgalloway for the response! This indeed helps. Can I extend the question to also understand how I can enforce that the individual searches between the OR conditions definitely return results, and only then combine the results (similar to an inner join) using the Id field?
It sounds like you need the values function.

(index="B" logType="REQUEST") OR (index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*")
| stats values(message) as messages, latest(*) as * by Id
I am new to Splunk queries and was trying to combine results from multiple queries without using subsearches, because subsearches are limited to 50000 results and our dataset has more than 50000 records to be considered. Below is the query I was trying:

(index="B" logType="REQUEST") OR (index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*")
| stats latest(*) as * by Id

All of the above queries have the Id field in the result, which matches and acts as a kind of correlation id between these logs. I would like the end result to show all the common fields that have the same values, but with the message field holding the consolidated message content from the individual queries made on the same index B. The message field alone can have different values between the queries and needs to be consolidated in the result. Can someone help with how this can be done? @splunk
Some of my customers are using Splunk as their SIEM solution. I have a security platform that needs to integrate with their Splunk to send security events (probably syslog) into a certain index (which might be an existing or a brand new one). I already made a PoC using HEC and successfully managed to deliver my syslog events into an index in my test Splunk account (using Splunk Cloud Platform). The setup process my customers will have to follow for the HEC integration is to create a new data input, create a token, and eventually deliver it to me (alongside their Splunk hostname). Now I'm wondering if this process can somehow be simplified using an app/add-on. I'm not sure exactly what functionality an add-on provides and whether I can leverage it to simplify the integration onboarding process between my security product and my customers. Is there anything else I should consider? Would love to know; I'm completely new to Splunk. Also, in case it matters, most of my customers are using Splunk Cloud Platform, but in the future there might be customers on Splunk Enterprise. Thanks
I want to use the reset password action in Splunk SOAR, but it doesn't work and gives this error message:

handle_action exception occurred. Error string: ''LDAPInvalidDnError' object has no attribute 'description''
{"severity":"INFO","ts":1704101563.224535,"logger":"controller","msg":"Seccomp profile 'not configured' is not allowed for container 'splunk-fluentd-k8s-objects'. Found at: no explicit profile found.... See more...
{"severity":"INFO","ts":1704101563.224535,"logger":"controller","msg":"Seccomp profile 'not configured' is not allowed for container 'splunk-fluentd-k8s-objects'. Found at: no explicit profile found. Allowed profiles: {\"RuntimeDefault\", \"docker/default\", \"runtime/default\"}","process":"audit","audit_id":"2024-01-01T09:32:31Z","details":{},"event_type":"violation_audited","constraint_group":"constraints.gatekeeper.sh","constraint_api_version":"v1beta1","constraint_kind":"K8sPSPSeccomp","constraint_name":"cis-k8s-v1.5.1-psp-seccomp-default","constraint_namespace":"","constraint_action":"warn","resource_group":"","resource_api_version":"v1","resource_kind":"Pod","resource_namespace":"idmzct0-ito-utils-splunkdc-callsign","resource_name":"gkeusr-idmzc-dev-tier0-01-splunk-kubernetes-objects-5686d96j7nj","resource_labels":{"app":"splunk-kubernetes-objects","engine":"fluentd","pod-template-hash":"5686d96bd8","release":"gkeusr-idmzc-dev-tier0-01"}} Show syntax highlighted cluster_name = gkeusr-idmzc-dev-tier0-01container_name = managerhost = npool-cos-apps-medium-7b7dd5cdb8-s6lrpnamespace = gatekeeper-systempod = gatekeeper-audit-789888c597-q9vt8severity = INFOsource = /var/log/containers/gatekeeper-audit-789888c597-q9vt8_gatekeeper-system_manager-da5f687a6b53035c4299f8e3c5cc941c510756de883f2f0e68e783cd4edc7191.logsourcetype = kube:container:manager
Hi @syaseensplunk, yes, it's correct, the location is on the Indexers, even if I don't like to have the inputs directly on the Indexers; I prefer to have a dedicated Heavy Forwarder (better two with a Load Balancer for HA). So, coming back to your issue, it's another one: could you share a sample of your logs, to check the regex? Ciao. Giuseppe
For such events, if they are in valid JSON format, Splunk may automatically extract the fields. If not, you could also try the field extraction wizard in Splunk, which should be able to generate a working regex for you if you select the fields you want. If not, this one may work for your purpose, but it assumes that there are no empty fields:

id":"(?<id>[^"]*)","referenceNumber":"(?<referenceNumber>[^"]*)","formId":"(?<formId>[^"]*)"
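In case it helps, a minimal sketch of applying that regex at search time with rex (index=your_index and sourcetype=your_sourcetype are placeholders for whatever returns these events):

index=your_index sourcetype=your_sourcetype
| rex "id\":\"(?<id>[^\"]*)\",\"referenceNumber\":\"(?<referenceNumber>[^\"]*)\",\"formId\":\"(?<formId>[^\"]*)\""
| table id referenceNumber formId

or, if the events are valid JSON, spath is the JSON-aware alternative that needs no regex at all:

index=your_index sourcetype=your_sourcetype
| spath
| table id referenceNumber formId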
I have Splunk Connect for Kubernetes, which is responsible for forwarding the logs directly to the indexers using a HEC token. The props.conf and transforms.conf should be on the indexer layer to process the incoming data from Kubernetes via Splunk Connect - this is my understanding. Hope this helps!!
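To illustrate that layout, here is a rough sketch of what an index-time transform placed on the indexers could look like; the sourcetype, transform name, and destination index below are hypothetical, not taken from the thread:

# props.conf on the indexers
[kube:container:manager]
TRANSFORMS-route_k8s = route_k8s_events

# transforms.conf on the indexers
[route_k8s_events]
SOURCE_KEY = _raw
REGEX = gatekeeper-system
DEST_KEY = _MetaData:Index
FORMAT = k8s_events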
Hi Support, can you please help me with a field extraction for id, referenceNumber, and formId?

{"id":"0fb56c6a-39a6-402b-8f07-8b889a46e3e8","referenceNumber":"UOB-SG-20240101-452137857","formId":"sg-pfs-save-savings-festival"}

Thanks, Hari
I see in your original post that you mention searching over the last 7 days, but your SPL has "earliest=-1h" hardcoded in it. This will override the time range selected in the time picker. I also have some Windows event logs indexed in my local instance, and by default it looks like the source is WinEventLog:Security and the sourcetype is WinEventLog. So maybe try updating your search to something like this and see if you get the expected results:

index=<your_index> sourcetype=WinEventLog source="WinEventLog:Security" Account_Name=maxwell EventCode=4740 host IN ("dctr01*", "dctr02*", "dctr03*", "dctr04*") earliest=-7d@d latest=now
| table _time Caller_Computer_Name Account_Name EventCode Source_Network_Address Workstation_Name
Looks good. I'll check it. I also thought of using EVAL after extraction to replace NULLs and spaces:

EVAL-OldType=if(isnull(OldType) OR OldType = " ", "noData", OldType)
EVAL-NewType=if(isnull(NewType) OR NewType = " ", "noData", NewType)
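For reference, a sketch of where those EVAL- statements would live, assuming a hypothetical sourcetype name my_sourcetype; EVAL- in props.conf is evaluated at search time:

# props.conf on the search head
[my_sourcetype]
EVAL-OldType = if(isnull(OldType) OR OldType = " ", "noData", OldType)
EVAL-NewType = if(isnull(NewType) OR NewType = " ", "noData", NewType)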
Hi @rolypolytoyy, Which versions of Splunk, MLTK, and PSC do you have installed? See https://docs.splunk.com/Documentation/MLApp/latest/User/MLTKversiondepends#Version_matrix for a compatibility matrix. At a glance, when _arpack.<build>.pyd loads, it can't find the exports it needs in dependent DLLs, e.g. mkl_rt.1.dll, python38.dll, etc.
--with the caveat that range() values are always positive, i.e. abs(x-y).
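If the sign of the day-over-day change matters, a sketch of one alternative (using the perc and Name fields from this thread, and assuming _time is populated) is:

| stats earliest(perc) as first_perc latest(perc) as last_perc by Name
| eval delta_perc=last_perc-first_perc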
Hi @bhava2704, Given your sample data:

| makeresults format=csv data="Name,perc,date
xxx,90,28-Dec-23
yyy,91,28-Dec-23
zzz,92,28-Dec-23
xxx,96,29-Dec-23
yyy,97,29-Dec-23
zzz,98,29-Dec-23"
| eval _time=strptime(date, "%d-%b-%y")

you can use streamstats, timechart and autoregress, timechart and timewrap, etc. The timewrap command depends on the search earliest and latest times, so I've set them to 2023-12-28 and 2023-12-29, respectively. When using streamstats, be mindful of the event order. In the example, your results are sorted by date/_time ascending. In a normal event search, your results will be sorted by _time descending, and you'll need to adjust the streamstats etc. arguments accordingly.

| streamstats global=f window=2 first(perc) as perc_p1 by Name
| eval delta_perc=perc-perc_p1

or

| timechart fixedrange=f span=1d values(perc) by Name
| autoregress xxx p=1
| autoregress yyy p=1
| autoregress zzz p=1
| eval delta_xxx=xxx-xxx_p1, delta_yyy=yyy-yyy_p1, delta_zzz=zzz-zzz_p1

or

| timechart fixedrange=f span=1d values(perc) by Name
| timewrap 1d
| eval delta_xxx=xxx_latest_day-xxx_1day_before, delta_yyy=yyy_latest_day-yyy_1day_before, delta_zzz=zzz_latest_day-zzz_1day_before
Do you mean something like this?

| stats range(perc) as range by Name
The delta command seems like it goes in the right direction, but the only problem is that it can't be told to do separate deltas based on the values of other fields. If you don't have too many different values of the Name field, you could separate the perc values into differently named fields and then do deltas on each one. This also requires you to sort by Name and then sort back to whatever your preferred sort order is after the delta operation.

| sort Name
| eval perc_xxx = if(Name="xxx",perc,perc_xxx)
| eval perc_yyy = if(Name="yyy",perc,perc_yyy)
| eval perc_zzz = if(Name="zzz",perc,perc_zzz)
| delta perc_xxx as delta_perc
| delta perc_yyy as delta_perc
| delta perc_zzz as delta_perc
| fields - perc_*
| sort date
max_memtable_bytes is still relevant to performance when using large lookup files, but it has nothing to do with regular expressions.
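For anyone looking for the setting itself, max_memtable_bytes is configured in limits.conf under the [lookup] stanza; the value below is purely illustrative, not a recommendation:

# limits.conf
[lookup]
# maximum lookup file size (in bytes) that Splunk will hold in memory
max_memtable_bytes = 104857600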
Hi @surajsplunkd, If the host is restarted or the forwarder service is restarted when the hostname changes, you can configure Splunk to manage this case automatically by setting host = $decideOnStartup. See https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf#GLOBAL_SETTINGS for more information. Restarting Splunk when an online hostname change occurs is distribution dependent.
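For example, a minimal inputs.conf sketch on the forwarder (the [default] stanza applies the setting globally; it can also be set per input):

# inputs.conf on the forwarder
[default]
host = $decideOnStartup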