All Posts

We're seeing similar issues with 9.3.2408
Ah, great news @uagraw01 ! I'm glad you got it resolved.
Hello @Satyams14, you can follow this documentation to do the integration: https://repost.aws/articles/ARhXA6njHGRzKEXQ20BKO4lA/how-to-integrate-amazon-guardduty-findings-with-on-premises-splunk. Please mark this as the solution if it helps you.
index=ep_log event=created | spath path=properties | mvexpand properties | spath input=properties

This query automatically expands fields for every attribute key.
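For reference, a minimal end-to-end sketch of that approach (assuming the ep_log index and the field names visible in the sample event; if attributes comes out as an embedded JSON string, the extra spath pass over it pulls out the leaf fields):

index=ep_log event=created
| spath path=properties
| mvexpand properties
| spath input=properties
| spath input=attributes
| table email first_name last_name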
This works for certain strings, but not others. Does whitespace before or after the desired string in the event affect it? If I use the string described above, this solution works, but with a different string it does not work. What gives?
This does not work as I understand it:

index="mysearch" log_level=info | spath | search message="*Unit state update from cook client target*"

In fact it makes my search much slower, while still not yielding any results.
I have an errant application that is sending too much data to my Splunk Enterprise instance. This is causing licensing overages and warnings. Until I can fix all the occurrences of this application, I need to configure Splunk to just drop these oversized entries. I don't want to reject or truncate all messages, just anything over, say, 512 KB. My understanding is I can do this with updates to transforms.conf and props.conf? Here's my transforms.conf:

[drop_unwanted_logs]
# Drop logs containing these terms
REGEX = (DEBUG|healthcheck|keepalive)
DEST_KEY = queue
FORMAT = nullQueue

[drop_large_events]
# Matches any log >= 512 KB
REGEX = ^.{524288,}
DEST_KEY = queue
FORMAT = nullQueue

Ideally, I want this to focus on two of my HECs, so I updated props.conf:

[source::http:event collector 1]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[source::http:event collector 2]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[sourcetype::http:event collector 1]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

[sourcetype::http:event collector 2]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

Am I heading in the right direction? Or will the following apply to all HECs?

[sourcetype::httpevent]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288
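One note on the stanza names: props.conf stanzas take the forms [<sourcetype>], [source::<pattern>], and [host::<pattern>]; there is no [sourcetype::...] prefix, so the [sourcetype::http:event collector 1] stanzas above would not match anything. A sketch of the per-sourcetype form, where my_hec_sourcetype is a placeholder for whatever sourcetype the two HEC tokens actually assign:

# props.conf -- a sourcetype stanza is just the bare sourcetype name
[my_hec_sourcetype]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

Keep in mind that TRUNCATE = 524288 cuts long events at 512 KB rather than dropping them, so oversized events may already be truncated before the nullQueue regex sees them; setting TRUNCATE comfortably higher than the drop threshold avoids that interaction.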
My event, with inputs.conf sourcetype = rsa:syslog:

feb 01 10:24:12 myhostname 2025-02-01 10:24:12,999, myhostname, audit.admin.com.cd.etc info

My props.conf:

[rsa:syslog]
TRANSFORMS-change_sourcetype = change_sourcetype

My transforms.conf:

[change_sourcetype]
DESK_KEY = MetaData:Sourcetype
SOURCE_KEY = MetaData:Sourcetype
REGEX = \,\s+adudit\.admin
FORMAT = sourcetype::new:sourcetype

Could anyone help? My sourcetype doesn't change to "new:sourcetype". Thank you.
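Two things stand out, for what it's worth: DESK_KEY is presumably a typo for DEST_KEY, and with SOURCE_KEY = MetaData:Sourcetype the REGEX is tested against the sourcetype metadata rather than the event text, so it never sees "audit.admin" and the transform can never fire. A sketch with both corrected, matching against the raw event instead (note the sample event says "audit.admin" while the post's regex says "adudit"):

[change_sourcetype]
# match against the raw event text (the default SOURCE_KEY)
SOURCE_KEY = _raw
REGEX = ,\s+audit\.admin
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::new:sourcetype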
Now I have a new field "attributes":

attributes: {"email": "gacilia@gmail.com", "clients": {"ERP Frontend": "GEgzvJrIJxxHNS9FVdSvUej5wyrBgd2sSHH7RLuE", "Frontend CRM": "ILrkYrSCSsKgdgxBRv0COxKLaOzKufXogzWEAoh8"}, "is_active": false, "last_name": "Gac", "legacy_id": "66f510fea8f5e1ff130f5fa0", "first_name": "Ilia", "start_date": null, "is_team_supervisor": true, "two_factor_auth_enabled": false}
Hello Giuseppe, thanks for the reply, but my data looks like this (shared in the screenshot). How do we compare the current-date values with the previous-date values for each host and show the difference as a timechart series? My query ends with:

| timechart span=1d avg(File_Total) by HOST
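In case it helps, a sketch of one way to get the day-over-day difference per host as a chartable series (assuming the HOST and File_Total fields from the post; streamstats pulls in each host's previous-day value):

... | bin _time span=1d
| stats avg(File_Total) as avg_total by _time HOST
| streamstats current=f window=1 last(avg_total) as prev_total by HOST
| eval day_over_day = avg_total - prev_total
| xyseries _time HOST day_over_day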
I was told that there is an app that can run the btool command on cloud instances. Does anybody know the name of this app?
Hello Team, can someone provide me the steps to integrate AWS GuardDuty logs using the Splunk Add-on for AWS? Please also provide documentation if any exists.
Hi, I have the following conf for Application events:

[WinEventLog://Application]
_TCP_ROUTING = sample
current_only = 0
disabled = false
index = eventviewer
sourcetype = applicationevents
start_from = oldest
blacklist1 = EventCode="^(33)" SourceName="^Chrome$"

I have EventCode 256 events in the Application log under Source Chrome, but I do not see any of those events in Splunk for some reason. I don't see any errors in splunkd.log. What could be the reason for this? I would really appreciate insight on this.
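For reference, the key=regex pairs on a single blacklist line are ANDed, so blacklist1 above only drops events whose EventCode starts with 33 and whose SourceName is exactly Chrome; EventCode 256 events from Chrome should pass through it. One way to confirm which stanza the forwarder is actually running (assuming CLI access on that host):

splunk btool inputs list WinEventLog://Application --debug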
Hi @Daryl.Mercadel, Thanks for asking your question on the Community. Here are some APIs you can look into: https://docs.appdynamics.com/appd/onprem/24.x/latest/en/extend-appdynamics/splunk-appdynamics-apis
Try something like this | spath properties | spath input=properties attributes
Hi @Manel.Benabid, thank you for asking your question on the Community. It seems that after a few days the community has not jumped in. Have you found a solution or any new information you can share here? If you still need help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support?
Hi, I need to ingest some logs into Splunk, so a Files & Directories data input is my choice. A new index was also created, with _json as the sourcetype. Now I'm trying to use spath in search to parse JSON data with multiple fields, and no luck yet. I just checked my JSON file: it is valid JSON. Here we have some parsed JSON, but I want to get email, first_name, and last_name from properties.attributes so that I can parse or filter by any of these fields in the future. I appreciate any help. Small source file: https://paste2.org/OsEXkgbJ

Here is what I tried:

index=ep_log event=created | spath properties.attributes
index=erp_log event=created | spath properties

and so on.
@livehybrid For your information: I found the solution in the Splunk known issues for version 9.1.1, and after applying it, the search started working fine.
@kiran_panchavat Thanks for your response. My concern is that it worked fine in Splunk Enterprise 8.1.1, but after upgrading to version 9.1.1, I am encountering fatal errors and “bad allocation” issues for the same scheduled search.
Try this: | spath | search message="*Unit state update from cook client target*"
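One caveat with this wildcard approach: the leading and trailing * absorb any surrounding whitespace, but the spacing inside the quoted pattern must match the event exactly (a double space in the message, for example, will defeat it). A sketch that tolerates variable interior whitespace instead (assuming the same message field):

| spath
| regex message="Unit\s+state\s+update\s+from\s+cook\s+client\s+target"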