We have several summary searches that collect data into metric indexes. They run nightly, and some of them create quite a large number of events (~100k). As a result we sometimes see warnings that the metric indexes cannot be optimized fast enough. A typical query looks like:

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| bin _time AS day span=24h aligntime=@d+3h
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

The main contributor to the large number of events is the cardinality of deviceId (~100k), which is effectively a MAC address with a common prefix and defined length. I could create 4/8/16 reports, each selecting a subset of deviceIds, and schedule them at different times, but it would be quite a burden to maintain those basically identical copies. So... I wonder if there is a mechanism to shard the search results and feed them into many separate mcollects that are spaced apart by some delay. Something like:

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| shard by deviceId bins=10 sleep=60s
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

Maybe my pseudo-code above is not so clear. What I would like to achieve is that instead of one huge mcollect I get 10 mcollects (each for approximately 1/10th of the events), scheduled approximately 60 seconds apart from each other...
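There is no `shard` command in SPL, but one workaround worth sketching is to derive a deterministic shard number from deviceId and let each scheduled copy of the search filter on its own shard — this is a sketch under the assumption that 10 shards are wanted and that the shard number for each copy would be supplied by a macro or saved-search parameter rather than hard-coded:

```spl
index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| eval shard = tonumber(substr(md5(deviceId), 1, 4), 16) % 10
| where shard = 3
| bin _time AS day span=24h aligntime=@d+3h
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId
```

Here `md5()`, `substr()`, and `tonumber(str, base)` are standard eval functions, so the shard assignment is stable per deviceId; the literal `3` is the hypothetical shard index for this one copy. If the shard filter lives in a macro (e.g. `` `shard_filter(3)` ``), the ten scheduled copies differ only in one argument and a cron offset of a minute each, which keeps the maintenance burden close to a single report.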
Hi all. After attempting to troubleshoot this issue on my own, I will try my luck here. Purpose: clone one sourcetype so the logs are stored both on a local indexer and on a distant one. I use one heavy forwarder to receive the logs and store them on an indexer, and the same heavy forwarder should clone the sourcetype and forward the clone to a distant heavy forwarder that I don't manage. Here is my config:

[inputs.conf]
[udp://22210]
index = my_logs_indexer
sourcetype = log_sourcetype
disabled = false

This works pretty well, and all logs are stored on my indexer. Now comes the cloning part:

[props.conf]
[log_sourcetype]
TRANSFORMS-log_sourcetype-clone = log_sourcetype-clone

[transforms.conf]
[log_sourcetype-clone]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = distant_HF_formylogs

[outputs.conf] => for cloned logs
[tcpout:distant_HF_formylogs]
server = ip_of_distant_HF:port
sendCookedData = false

This configuration is used in another use case, where I sometimes have to anonymize some logs. However, for this particular use case, when I activate the cloning part, it stops the complete log flow, even to the local indexers. I don't quite understand why, because I don't see the difference from my other use case, apart from the fact that these logs arrive over UDP rather than TCP. Am I missing something? Thanks a lot for your help.
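One behavior worth checking in this area: if the extra tcpout group's target is unreachable or slow, its output queue can fill and back-pressure the whole ingestion pipeline, which stalls local indexing as well. A hedged sketch of a mitigation — not a confirmed diagnosis of this exact setup, and the group/server names are taken from the question — is to let that one group drop events instead of blocking:

```conf
# outputs.conf — sketch, assuming the distant HF may be unreachable at times
[tcpout:distant_HF_formylogs]
server = ip_of_distant_HF:port
sendCookedData = false
# after the queue has been blocked for 10 seconds, drop events for
# this group rather than stalling the entire pipeline
dropEventsOnQueueFull = 10
```

The default `dropEventsOnQueueFull = -1` means "block forever", which would match the observed symptom of the complete flow stopping once cloning is enabled.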
In a Splunk cluster environment we have AD groups created in Azure for each of the applications. We have created a new custom role with 2 extra capabilities on top of the default user role, and we are assigning this role to an AD group. But if, for example, an AD group contains 20 users, the role is reflected for only 5 users today and 10 users tomorrow; it is not reflected for all the users. Please help me assign the role to all the users. #Splunk #ADgroupassignment
This may be a very simple question, but I'm having trouble identifying the answer. I've been trying to find a way to use RUM data to identify and list the slowest pages on a website using the Observability dashboard. Unfortunately, I don't seem to be able to drill down to any specific page from the dashboard. From the research I've done, it seems I may have to manually add thousands of RUM URL groupings to drill down further, but I have a feeling that can't be correct?
Hi, I want to ingest the backup logs, which are in CloudWatch, into Splunk using the AWS Add-on, but I do not see any metric in the Add-on to fetch these details. Under which metric will these backup logs be present? How can I get these logs into Splunk using the Add-on? Thank you!
Updated the Splunk Palo Alto app on a search head and I'm getting these error messages in the _internal index. Any clues? Splunk_TA_paloalto 8.1.1, Splunk core 9.0.3, OS is Ubuntu, fully patched.

04-08-2024 12:49:40.061 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=aperture: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:40.061 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=aperture: RequestsDependencyWarning)
04-08-2024 12:49:40.969 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=autofocus_export: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:40.969 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=autofocus_export: RequestsDependencyWarning)
04-08-2024 12:49:59.031 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=cortex_xdr: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:59.031 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=cortex_xdr: RequestsDependencyWarning)
04-08-2024 12:50:00.762 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=iot_security: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:50:00.762 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=iot_security: RequestsDependencyWarning)
Hello everyone! I need some help creating a multivalue field. Events can contain one or more fields of the following form. Let me explain with an example.

Event1:
FICHERO_LOG1 = /any/log1/id/idca-admin/idca-admin.log
FICHERO_LOG2 = /any/log1/id/log1/any1.log
FICHERO_LOG3 = /any/log1/httpd/*

Event2:
FICHERO_LOG1 = /any/log2/id/id.log
FICHERO_LOG2 = /any/log2/logging.log
FICHERO_LOG3 = /any/log2/tree/httpd/ds/log2/*
FICHERO_LOG4 = /any/log2/id/id-batch/id-batch2.log

EventN:
FICHERO_LOG1 = /any/logN/data1/activemq.log
FICHERO_LOG2 = /any/logN/id/hss2/*.system.log
………
FICHERO_LOGN = /any/path1/id/…./*…..log

The result I expect is a single multivalue field per event. For Event1:

LOG = /any/log1/id/idca-admin/idca-admin.log
      /any/log1/id/log1/any1.log
      /any/log1/httpd/*

For Event2:

LOG = /any/log2/id/id.log
      /any/log2/logging.log
      /any/log2/tree/httpd/ds/log2/*
      /any/log2/id/idca-batch/idca-batch2.log

For EventN:

LOG = /any/logN/data1/activemq.log
      /any/logN/id/hss2/*.system.log
      …….
      /any/path1/id/…./*…..log

I have tried with:

transforms.conf:
[my-log]
REGEX=^.*FICHERO_LOG.*\=\s*( ?<log>.*?)\s*\n
MV-AD=true

props.conf:
[extractingFields]
TRANSFORM = other_transforms_stanza, my-log

But it's not working. Any ideas or help? What steps should I follow? Thanks, JAR
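For reference, a sketch of how a search-time multivalue extraction of this shape is usually wired up. Note that the documented setting is `MV_ADD` (not `MV-AD`) and the props.conf key for a transforms-based extraction is `REPORT-`, not `TRANSFORM`; the stanza names below are assumptions carried over from the question:

```conf
# transforms.conf — sketch
[my-log]
# capture the path after each FICHERO_LOGn = ...; with MV_ADD,
# every match is appended to the multivalue field LOG
REGEX = FICHERO_LOG\d+\s*=\s*(?<LOG>\S+)
MV_ADD = true

# props.conf — sketch; [extractingFields] stands in for the real
# sourcetype stanza
[extractingFields]
REPORT-my-log = my-log
```

The original regex also has a space inside the named group, `( ?<log>`, which makes it an ordinary group with a literal-space alternative rather than a named capture; `(?<LOG>...)` must be written with no space.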
Hello all, we plan to use the Splunk OVA for VMware Metrics (5096) in combination with Splunk on Windows. I can't find any information on how this OVA is supported, e.g. operating system and Splunk updates. Does anyone know? Regards, Bernhard
I am using the below to load colors into a dropdown list. The data loads properly, but it always shows: "Could not create search - No Search query provided".

<input type="dropdown" token="color" depends="$color_dropdown_token$" searchWhenChanged="false">
  <label>Color</label>
  <choice value="*">All</choice>
  <choice value="Green">Green</choice>
  <choice value="Orange">Orange</choice>
  <choice value="Red">Red</choice>
  <initialValue>*</initialValue>
  <search>
    <query/>
    <earliest>$Time.earliest$</earliest>
    <latest>$Time.latest$</latest>
  </search>
</input>
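An empty `<query/>` inside a `<search>` element is what typically produces that warning. Since every choice here is static, one sketch of a fix — assuming no dynamically populated choices are actually needed — is simply to drop the `<search>` block altogether:

```xml
<input type="dropdown" token="color" depends="$color_dropdown_token$" searchWhenChanged="false">
  <label>Color</label>
  <choice value="*">All</choice>
  <choice value="Green">Green</choice>
  <choice value="Orange">Orange</choice>
  <choice value="Red">Red</choice>
  <initialValue>*</initialValue>
</input>
```

If dynamic choices are needed later, the `<search>` element would need a populated `<query>` plus `<fieldForLabel>`/`<fieldForValue>` settings.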
Have a nice day! I have several Splunk instances and often see the message below:

WorkloadsHandler [111560 TcpChannelThread] - Workload mgmt is not supported on this system.

I know that the workload management feature is not supported on Windows, and it is obviously disabled. How can I get rid of this annoying message in splunkd.log?
Below are the CIM macros I am using; different indexes are mapped in each individual macro. I want to get the list of all indexes mapped across all the CIM macros, so I set up a scheduled search that expands and checks all the macros. But it uses a lot of memory, and the searches are even failing. Please help me with a better way to get the list of all indexes mapped in the CIM macros.

cim_Authentication_indexes
cim_Alerts_indexes
cim_Change_indexes
cim_Endpoint_indexes
cim_Intrusion_Detection_indexes
cim_Malware_indexes
cim_Network_Resolution_indexes
cim_Network_Sessions_indexes
cim_Network_Traffic_indexes
cim_Vulnerabilities_indexes
cim_Web_indexes
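One lightweight alternative worth sketching is to read the macro definitions from the REST configuration endpoint instead of executing every macro against event data — this only inspects configuration, so it is cheap:

```spl
| rest /servicesNS/-/-/admin/macros splunk_server=local
| search title="cim_*_indexes"
| table title definition
```

The `definition` column then contains the index expression stored in each macro (e.g. `(index=foo OR index=bar)`), which can be post-processed with `rex`/`makemv` if a flat list of index names is wanted. `splunk_server=local` keeps the call on the search head where the macros are defined.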
How do I resolve the repetitive alert RSA_Probe_Alert_RSA_SECUREID_null? Splunk checks every minute for events with the keyword "svc_radius_probe_ctx", and when no events with that keyword are found for that minute, the alert is triggered. All the VMs and servers are working fine, yet we get this alert at least once every week.
Hi all, I'm monitoring compliance data for the past 7 days using timechart. My current query displays the count of "comply" and "not comply" events for each day:

index=indexA
| timechart span=1d count by audit

However, I'd like to visualize this data as percentages instead. Is it possible to modify the search to display the percentage of compliant and non-compliant events on top of each bar? Thanks in advance for your help!
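A common way to turn the per-day counts into percentages is to total each row and divide every column by that total — a sketch, assuming the split-by values of audit are exactly the columns produced by the timechart (the `_total` helper field is introduced here so that `foreach *`, which skips underscore-prefixed fields, leaves it untouched):

```spl
index=indexA
| timechart span=1d count by audit
| addtotals fieldname=Total
| eval _total = Total
| foreach * [eval <<FIELD>> = round('<<FIELD>>' / _total * 100, 1)]
| fields - Total
```

Each remaining column is then a percentage of that day's events, so a stacked bar chart shows bars summing to 100. Rendering the number on top of each bar is a chart-property question (e.g. `charting.chart.showDataLabels`), separate from the search itself.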
I'm looking to craft a query (a correlation search) that would trigger an alert when an internal system tries to access a malicious website. I would greatly appreciate any suggestions you may have. Thank you in advance for your help. Source=bluecoat
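One common pattern for this is to match the proxy's destination field against a threat-intelligence lookup of known-bad domains — a sketch in which the lookup name `malicious_domains` (with a `domain` column) and the Blue Coat field names `dest_host`/`src_ip` are assumptions, not taken from the question:

```spl
index=proxy sourcetype=bluecoat
| lookup malicious_domains domain AS dest_host OUTPUT domain AS matched_domain
| where isnotnull(matched_domain)
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen by src_ip, dest_host
```

Scheduled as a correlation search with "trigger when results > 0", this alerts once per offending source/destination pair. If Splunk Enterprise Security is in use, the equivalent is usually built on the Web data model with a threat-intel lookup maintained by the threat intelligence framework.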
Since we are in the early stages of using Splunk Cloud, we don't define props.conf as part of the onboarding process; we only introduce props for a sourcetype when its parsing goes wrong. For one such sourcetype, line breaking is off, yet when looking for the sourcetype on the indexers (via support) and on the on-prem HF where the data is ingested, I cannot find any props for it. So I wonder: is it possible to have no props stanza anywhere for a particular sourcetype?
Hello Splunkers! Below is a sample event; I want to extract some fields in Splunk while indexing. I have used the props.conf below to extract the fields, but nothing shows up in Splunk under interesting fields (I have also attached a screenshot of the Splunk UI results). Please guide me on what I need to change in these settings.

[demo]
KEEP_EMPTY_VALS = false
KV_MODE = xml
LINE_BREAKER = <\/eqtext:EquipmentEvent>()
MAX_TIMESTAMP_LOOKAHEAD = 24
NO_BINARY_CHECK = true
SEDCMD-first = s/^.*<eqtext:EquipmentEvent/<eqtext:EquipmentEvent/g
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3f%Z
TIME_PREFIX = ((?<!ReceiverFmInstanceName>))<eqtext:EventTime>
TRUNCATE = 100000000
category = Custom
disabled = false
pulldown_type = true
FIELDALIAS-fields_scada_xml = "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.AreaID" AS area "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ElementID" AS element "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.EquipmentID" AS equipment "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ZoneID" AS zone "eqtext:EquipmentEvent.eqtext:ID.eqtext:Description" AS description "eqtext:EquipmentEvent.eqtext:ID.eqtext:MIS_Address" AS mis_address "eqtext:EquipmentEvent.eqtext:Detail.State" AS state "eqtext:EquipmentEvent.eqtext:Detail.eqtext:EventTime" AS event_time "eqtext:EquipmentEvent.eqtext:Detail.eqtext:MsgNr" AS msg_nr "eqtext:EquipmentEvent.eqtext:Detail.eqtext:OperatorID" AS operator_id "eqtext:EquipmentEvent.eqtext:Detail.ErrorType" AS error_type "eqtext:EquipmentEvent.eqtext:Detail.Severity" AS severity

Sample event:

<eqtext:EquipmentEvent xmlns:eqtext="http://vanderlande.com/FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://vanderlande.com/FM/Common/Services/ServicesBaseTypes/V1/8/4" xmlns:eqtexo="http://vanderlande.com/FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>8503</AreaID><ZoneID>3</ZoneID><EquipmentID>3</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> LMS not healthy</eqtext:Description><eqtext:MIS_Address>0.3</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>WENT_OUT</State><eqtext:EventTime>2024-04-02T21:09:38.337Z</eqtext:EventTime><eqtext:MsgNr>4657614997395580315</eqtext:MsgNr><Severity>LOW</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent>
Hi, I have this search, for example:

index=test elb_status_code=200
| timechart count as total span=1s
| stats count as num_seconds by total
| sort by total

When I search over 1 or 2 days, my results include totals of 0, 1, 2, 3, etc. When I go above that, 3 days for example, I lose all the rows for the 0 value and my results start with 1, 2, 3, etc. Can anyone explain this? Am I doing something wrong, or could this be a bug somewhere?
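One plausible explanation, offered as an assumption to verify rather than a confirmed diagnosis: timechart caps the number of output buckets, and over roughly 3 days a 1-second span exceeds that cap, so the effective span silently widens; once each bucket covers many seconds, every bucket contains at least one event and the 0 row disappears. The 0 rows exist at all only because timechart fills empty buckets with a zero count. A sketch that preserves a true 1-second span uses bin + stats instead (note it will not produce 0 rows either, since seconds with no events yield no rows to bucket):

```spl
index=test elb_status_code=200
| bin _time span=1s
| stats count as total by _time
| stats count as num_seconds by total
| sort total
```

Comparing the number of distinct `_time` values this returns against the number of rows timechart returns for the same 3-day window would confirm or rule out the bucket-cap theory.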
Hi, I get an error when I try to pull logs from the Kaspersky Console. I've done all the setup tasks, such as configuring the port, IP, etc. This is what I see:

index="kcs" Type=Error Message="Cannot start sending events to the SIEM system. Functionality in limited mode. Area: System Management."
Hi experts, I seek assistance with the Sysmon inputs.conf configuration on a Splunk Universal Forwarder. The configuration is based on the Splunk Technology Add-on (TA) for Sysmon:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = 1
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
index = sysmon

Is this the correct config?