All Posts


Hi @richgalloway As I've tried to explain from the beginning, it has been metric data all along, which is why the default (event) index name had to be changed to a metric index name. It now works like a charm on the HF, so the fix was durable and works perfectly. Thanks for all your input; it helped me focus on the details here. All the best
Hi @BogeyMan  I guess the main question is: do you want to drop data > 512k, or just truncate it? If you want to truncate, then your TRUNCATE = <n> values should work to truncate to 512k. Your logic for drop_unwanted_logs also looks good. I know it might be pseudo-code, but for props.conf you don't need to specify sourcetype::<yourSourcetype>; it's just [<yourSourcetype>]:

[source::http:event collector 1]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[source::http:event collector 2]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[sourcetype1]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[sourcetype2]
TRANSFORMS-null = drop_unwanted_logs
TRUNCATE = 524288

Please let me know how you get on, and consider accepting this answer or adding karma if it has helped. Regards, Will
We recently installed Splunk Universal Forwarder 9.3.2 on a Windows 2019 server. After starting the forwarder I see the error below in splunkd.log. I tried uninstalling and reinstalling the UF, but still get the same error. Please let me know how to fix it.

Error:

02-25-2025 14:52:06.747 -0600 WARN TcpOutputProc [12132 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=(ip of indexer) inside output group splunkcloud_ from host_src=(ip folder source) has been blocked for blocked_seconds=5600. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
We're seeing similar issues with 9.3.2408
Ah, great news @uagraw01! I'm glad you got it resolved.
Hello @Satyams14,

You can follow this documentation to integrate the same: https://repost.aws/articles/ARhXA6njHGRzKEXQ20BKO4lA/how-to-integrate-amazon-guardduty-findings-with-on-premises-splunk

Please mark as solution if this helps you.
index=ep_log event=created
| spath path=properties
| mvexpand properties
| spath input=properties

This query automatically expands fields for every attribute key.
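To illustrate what the query above does, here is a plain-Python sketch of the same two steps (the sample event is hypothetical, not from the original post): the mvexpand-like step emits one row per element of the properties array, and the spath input=properties-like step promotes each element's keys to top-level fields.

```python
import json

# Hypothetical raw event: "properties" is an array of attribute objects.
raw = ('{"event": "created", "properties": '
       '[{"key": "color", "value": "red"}, {"key": "size", "value": "XL"}]}')

event = json.loads(raw)

# mvexpand-like step: one row per element of the multivalue field.
rows = [dict(event, properties=p) for p in event["properties"]]

# spath input=properties-like step: promote each element's keys to fields.
expanded = [
    {**{k: v for k, v in row.items() if k != "properties"}, **row["properties"]}
    for row in rows
]

for row in expanded:
    print(row)
```

Each output row keeps the parent event's fields and gains one attribute's key/value pair, which mirrors how the expanded fields appear in Splunk search results.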
This works for certain strings, but not others. Does whitespace before or after the desired string in the event affect it? If I use the string described above, this solution works, but with a different string it does not. What gives?
This does not work as I understand it:

index="mysearch" log_level=info | spath | search message="*Unit state update from cook client target*"

In fact it makes my search much slower, while still not yielding any results.
I have an errant application that is sending too much data to my Splunk Enterprise instance. This is causing licensing overages and warnings. Until I can fix all the occurrences of this application, I need to configure Splunk to just drop these oversized entries. I don't want to reject/truncate all messages, just anything over, say, 512k. My understanding is I can do this with updates to transforms.conf and props.conf? Here's my transforms.conf:

[drop_unwanted_logs]
# Drop logs containing these terms
REGEX = (DEBUG|healthcheck|keepalive)
DEST_KEY = queue
FORMAT = nullQueue

[drop_large_events]
# Matches any log >= 512 KB
REGEX = ^.{524288,}
DEST_KEY = queue
FORMAT = nullQueue

Ideally, I want this to focus on two of my HECs, so I updated props.conf:

[source::http:event collector 1]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[source::http:event collector 2]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[sourcetype::http:event collector 1]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[sourcetype::http:event collector 2]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

Am I heading in the right direction? Or will the following apply to all HECs?

[sourcetype::httpevent]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288
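As a sanity check on the size-matching pattern in the question above, the regex ^.{524288,} can be exercised outside Splunk. This minimal Python sketch only tests the pattern itself, not Splunk's queue routing. One caveat worth noting: in PCRE (which Splunk transforms use), a bare "." does not match newlines, so a multi-line event may not have its full length counted unless the pattern is prefixed with (?s); the sketch uses re.DOTALL to model that prefixed form.

```python
import re

# Same length threshold as [drop_large_events]: 524288 chars (512 KB).
# re.DOTALL lets "." span newlines, like a (?s) prefix would in PCRE.
large_event_re = re.compile(r"^.{524288,}", re.DOTALL)

small = "x" * 524287   # one char short of 512 KB
large = "x" * 524288   # exactly 512 KB

print(bool(large_event_re.match(small)))  # False
print(bool(large_event_re.match(large)))  # True
```

Running this confirms the pattern fires only at or above the 512 KB boundary.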
My event (inputs.conf has sourcetype = rsa:syslog):

feb 01 10:24:12 myhostname 2025-02-01 10:24:12,999, myhostname, audit.admin.com.cd.etc info

My props.conf:

[rsa:syslog]
TRANSFORMS-change_sourcetype = change_sourcetype

My transforms.conf:

[change_sourcetype]
DESK_KEY = MetaData:Sourcetype
SOURCE_KEY = MetaData:Sourcetype
REGEX = \,\s+adudit\.admin
FORMAT = sourcetype::new:sourcetype

Could anyone help? My sourcetype doesn't change to "new:sourcetype". Thank you.
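One way to narrow a problem like this down (a sketch, not a definitive answer) is to test the transform's REGEX against the sample event outside Splunk. In Python, with the event text and pattern copied verbatim from the post:

```python
import re

# Sample event text from the post above.
event = ("feb 01 10:24:12 myhostname 2025-02-01 10:24:12,999, myhostname, "
         "audit.admin.com.cd.etc info")

# The REGEX from transforms.conf, copied verbatim (spelled "adudit").
pattern = re.compile(r"\,\s+adudit\.admin")

print(pattern.search(event))  # None: the event says "audit", not "adudit"
```

If the pattern never matches the event, FORMAT is never applied and the sourcetype stays unchanged. The stanza's key names are also worth double-checking letter-for-letter against the transforms.conf specification.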
Now I have a new field "attributes":

{"email": "gacilia@gmail.com", "clients": {"ERP Frontend": "GEgzvJrIJxxHNS9FVdSvUej5wyrBgd2sSHH7RLuE", "Frontend CRM": "ILrkYrSCSsKgdgxBRv0COxKLaOzKufXogzWEAoh8"}, "is_active": false, "last_name": "Gac", "legacy_id": "66f510fea8f5e1ff130f5fa0", "first_name": "Ilia", "start_date": null, "is_team_supervisor": true, "two_factor_auth_enabled": false}
Hello Giuseppe, Thanks for the reply, but my data looks like this (shared in the screenshot). How do we compare the difference between the current date's values and the previous date's values for each host, and show it as a timechart series? My query ends with:

| timechart span=1d avg(File_Total) by HOST
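The day-over-day comparison the question describes can be sketched in plain Python (hypothetical data; the HOST and File_Total names are taken from the query above): for each host, subtract the previous day's average from the current day's.

```python
# Hypothetical daily averages keyed by (day, host), mimicking the shape of
# "| timechart span=1d avg(File_Total) by HOST" output.
daily_avg = {
    ("2025-02-01", "hostA"): 100.0,
    ("2025-02-02", "hostA"): 120.0,
    ("2025-02-01", "hostB"): 50.0,
    ("2025-02-02", "hostB"): 45.0,
}

days = sorted({day for day, _ in daily_avg})
hosts = sorted({host for _, host in daily_avg})

# Day-over-day delta per host: current day minus previous day.
delta = {}
for prev_day, cur_day in zip(days, days[1:]):
    for host in hosts:
        delta[(cur_day, host)] = daily_avg[(cur_day, host)] - daily_avg[(prev_day, host)]

print(delta)
```

In SPL itself, something along the lines of streamstats or delta applied per series after the timechart might achieve the same effect, but that is untested here and would need verifying against the actual data.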
I was told that there is an app that can run the btool command on cloud instances. Does anybody know the name of this app?
Hello Team,

Can someone provide me with the steps to integrate AWS GuardDuty logs using the Splunk Add-on for AWS? Please provide documentation if available.
Hi, I have the following conf for Application events:

[WinEventLog://Application]
_TCP_ROUTING = sample
current_only = 0
disabled = false
index = eventviewer
sourcetype = applicationevents
start_from = oldest
blacklist1 = EventCode="^(33)" SourceName="^Chrome$"

I have EventCode 256 events in the Application logs under Source Chrome, but I do not see any of those events in Splunk for some reason. I don't see any errors in splunkd.log. What could be the reason for this? I would really appreciate insight on this.
Hi @Daryl.Mercadel, Thanks for asking your question on the Community. Here are some APIs you can look into: https://docs.appdynamics.com/appd/onprem/24.x/latest/en/extend-appdynamics/splunk-appdynamics-apis
Try something like this:

| spath properties
| spath input=properties attributes
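As a cross-check of what those two spath calls extract (a sketch; the payload is the "attributes" value shown earlier in this thread), the nested JSON can be parsed directly:

```python
import json

# The "attributes" value from the earlier post in this thread.
raw = ('{"email": "gacilia@gmail.com", "clients": {"ERP Frontend": '
       '"GEgzvJrIJxxHNS9FVdSvUej5wyrBgd2sSHH7RLuE", "Frontend CRM": '
       '"ILrkYrSCSsKgdgxBRv0COxKLaOzKufXogzWEAoh8"}, "is_active": false, '
       '"last_name": "Gac", "legacy_id": "66f510fea8f5e1ff130f5fa0", '
       '"first_name": "Ilia", "start_date": null, "is_team_supervisor": true, '
       '"two_factor_auth_enabled": false}')

attributes = json.loads(raw)

# Nested keys that spath would surface as dotted field names.
print(attributes["email"])
print(attributes["clients"]["ERP Frontend"])
```

The nested "clients" object is why a second extraction pass helps: its keys only become individual fields once the inner JSON is parsed.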
Hi @Manel.Benabid, Thank you for asking your question on the Community. It seems after a few days the community has not jumped in.  Have you found a solution or any new information you can share here? If you are still needing help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support?