All Posts



Look in the add-on's props.conf file to see what sourcetypes are defined, then choose the one that matches your data. Be advised that data arriving via syslog may be in a different format than data fetched via an API, so the props may not work as expected.
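As an illustration of what to look for (the stanza name and settings below are made up for the example, not taken from any specific add-on), a sourcetype definition in an add-on's props.conf looks something like:

```
[vendor:product:syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %b %d %H:%M:%S
```

The stanza name in the square brackets is the value to set as sourcetype on your input so that these parsing rules are applied.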
Throttling works by checking whether the specified field changed value within the throttle period. Since _time is always changing, it is ineffective as a throttle field.
We can't help you with something that can't be done.  ES requires a dedicated SH, but Splunk Cloud trial accounts get a single SH.  That is why ES is not available in trial accounts.
Hi, I am trying to understand the best / most cost-effective approach to ingest logs from Azure AKS into Splunk Enterprise with Enterprise Security. The logs we have to collect are mainly for security purposes. Here are the options I have found:

- Use the "Splunk OpenTelemetry Collector for Kubernetes": https://docs.splunk.com/Documentation/SVA/current/Architectures/OTelKubernetes
- Use cloud facilities to export the logs to Storage Accounts
- Use cloud facilities to export the logs to Event Hubs
- Use cloud facilities to send syslog to a Log Analytics workspace: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-syslog

References:
https://learn.microsoft.com/en-us/azure/azure-monitor/containers/monitor-kubernetes
https://learn.microsoft.com/en-us/azure/aks/monitor-aks
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-data-export?tabs=portal
https://learn.microsoft.com/en-us/azure/architecture/aws-professional/eks-to-aks/monitoring
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-workspace-overview

Is there a way to use cloud facilities to stream the logs directly to Splunk so that we can avoid deploying the OTel collector? Otherwise, if we must first land the logs in a Log Analytics workspace / Storage Accounts / Event Hubs and pull them into Splunk via API calls with the "Splunk Add-on for Microsoft Cloud Services" or the "Microsoft Azure Add-on for Splunk", which is the best / most cost-effective approach? Thanks a lot, Edoardo
What predefined templating variables can be used to get the below details for a synthetic event? I tried #foreach ($item in ${fullEventList})#end but could only get the summary and event details. How can the other details be fetched?
Hi all, help me with extracting a field from the below two events:

System.Exception: Assertion violated: stream.ReadByteInto(bufferStream) == 0x03
System.Exception: An error was encountered while attempt to fetch proxy credentials for user 'xyz

Expected extractions:

system_exception=Assertion violated: stream.ReadByteInto
system_exception=An error was encountered while attempt to fetch proxy credentials for user

Thanks
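One hedged sketch of a rex extraction for these two events (the regex boundaries are assumptions based only on the two samples shown, so test it against your real data before relying on it):

```
| rex field=_raw "System\.Exception:\s+(?<system_exception>[^'(]+?)\s*(?:\(|'|$)"
```

The capture stops at the first opening parenthesis or single quote, which drops "(bufferStream) == 0x03" from the first event and the quoted username from the second; events with a different shape would need an adjusted pattern.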
Hi @AchimK, you should check the input settings inside the inputs.conf file. If you created this input using the GUI, it may be under the search app. You can find the location using btool, like below:

splunk btool inputs list tcp --debug | grep 5514

This will show the path of the inputs.conf file that you should edit to delete the malformed TCP input.
Hello! I have installed the Kemp add-on from here: https://splunkbase.splunk.com/app/6830 . The issue is I cannot find proper documentation on how to set up the data inputs and what sourcetype to specify in inputs.conf. For more context, I am collecting the logs through syslog, not the API, so I need to specify the sourcetype in inputs.conf for parsing to work properly.
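For illustration only (the port, index, and sourcetype value below are assumptions, not taken from the Kemp add-on's documentation; the actual sourcetype should be whatever stanza name the add-on's props.conf defines), a syslog input with an explicit sourcetype looks like:

```
[udp://514]
sourcetype = kemp:lm:syslog
index = netops
```

With the sourcetype set on the input stanza, the add-on's props/transforms for that sourcetype are applied at parse time.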
Enterprise Security is not available in the Splunk Cloud trial version. I need assistance with it.
Thank you for your response. Is there a difference between the performance (CPU and memory) data for a UF in the internal logs and the logs fetched by the Splunk Add-on for Windows or the Splunk Add-on for Unix and Linux?
I have the problem that I can't delete an input filter that I probably formulated incorrectly, so I want to take it out:

Error occurred attempting to remove a.b.*.*, c.d.e.0, f.g.*:5514: Malformed IP address: a.b.*.*, c.d.e.0, f.g.*:5514.

An outputs.conf under /system/local does not exist.
Oh yeah, I did that. Also, I was using REPORT instead of TRANSFORMS in props.conf. This is what worked:

props.conf
[source::ping]
TRANSFORMS-add_static_fields = mystaticFieldValue

transforms.conf
[mystaticFieldValue]
SOURCE_KEY = _raw
WRITE_META = true
REGEX = (.*)
FORMAT = item::31

fields.conf
[item]
INDEXED = true
Hi all, in my AD computer account deletion correlation search, I use _time and SubjectUserName as the throttling fields for grouping. Is adding _time to throttling the correct approach? Please correct me if I'm wrong.

Query:
index=win sourcetype=XmlWinEventLog EventCode=4743
| bin _time span=5m
| stats values(EventCode) as EventCode, values(signature) as EventDescription, values(TargetUserName) as deleted_computer, dc(TargetUserName) as computeruser_count by _time SubjectUserName
| where computeruser_count > 20

Time range: Earliest Time 20m@m, latest now
Cron schedule: */15 * * * *
Scheduling: set to Continuous
Throttling: window duration 12 hours; fields to group by: SubjectUserName, _time

Thanks in advance.
Hi Splunkers, today I have a very strange case to manage. I'll try to be as clear as possible. The scenario is a fully on-prem Splunk Enterprise environment with many components. We are not this customer's original provider; another company was in charge before us and developed a fully custom app. About this application:

- No documentation has been shared by the previous provider.
- It now shows some error messages that are not completely clear.

So, in a nutshell, we have to understand why we get those errors and try to fix them. Of course I'm not here to ask "Hey magic guys, give me the magic solution!"; the purpose of this topic is to ask for your help in understanding the data we have (only a small GUI dashboard with a short description of the app and how it works) and how we can fix those errors.

The app analyzes indexers and their indexes. Its purpose is to determine whether indexes are retaining the correct amount of historical data; to achieve this, it investigates the index retention status by comparing the currentTimePeriodDay value against frozenTimePeriodDay. To decide whether an error is found, the app considers two possible cases:

- currentTimePeriodDay > frozenTimePeriodDay + 45: considered unhealthy because indexes are retaining more historical data than expected.
- currentTimePeriodDay < frozenTimePeriodDay: considered unhealthy because indexes are retaining insufficient historical data.

For both cases, the suggested workaround is a generic tuning of retention and disk space settings. Of course there are more specific error messages for each index on every indexer (we have a menu to select specific indexers), but that, from my point of view, is a further analysis step. What is not clear to my team and me is the app's underlying logic: how should comparing currentTimePeriodDay with frozenTimePeriodDay help us check that index retention is good?
How are they related? Why, if one of them is greater than the other, could this be an unhealthy symptom?
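Assuming currentTimePeriodDay means "age in days of the oldest event still searchable in the index" (an assumption; the previous provider's definition may differ), a sketch of how such a comparison could be derived from the indexes REST endpoint:

```
| rest /services/data/indexes
| eval frozenTimePeriodDay = frozenTimePeriodInSecs / 86400
| eval currentTimePeriodDay = round((now() - strptime(minTime, "%Y-%m-%dT%H:%M:%S%z")) / 86400)
| eval status = case(currentTimePeriodDay > frozenTimePeriodDay + 45, "retaining too much",
                     currentTimePeriodDay < frozenTimePeriodDay, "retaining too little",
                     true(), "healthy")
| table title, currentTimePeriodDay, frozenTimePeriodDay, status
```

Under that reading, the two values are related through the bucket-freezing lifecycle: if the oldest searchable data is much older than frozenTimePeriodInSecs allows, buckets are not rolling to frozen as configured (often a sign retention is effectively driven by disk-size limits instead); if it is much younger, data is being frozen or discarded before the configured retention is reached.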
Hi @jokertothequinn, in order to query custom indexed fields you should add them to fields.conf on the search heads:

[location]
INDEXED=true
This does create the field. However, it doesn't seem to be indexed metadata, as the field does not work with tstats, for example:

| tstats count where index=main location=* by sourcetype

The following error appears: When used for 'tstats' searches, the 'WHERE' clause can contain only indexed fields. Ensure all fields in the 'WHERE' clause are indexed. Properly indexed fields should appear in fields.conf.
WORKS! thank you  
Hi Raj, I can provide you with a Python script which does the extract for you and emails it per License Rule. We use it to get a weekly summary. Please note this only applies to the usage you see on the License Rule page; it does not include EUM, Analytics, etc. But it should be easy to amend the script to add that in. You can DM me if you want the script.
Why are you formatting the two times before performing your calculation? Subtracting one string from another doesn't give you a number!
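As a hedged illustration (the field names start_time and end_time and the timestamp format are assumptions, since the original search isn't shown), convert the strings to epoch time first, then subtract:

```
| eval duration = strptime(end_time, "%Y-%m-%d %H:%M:%S") - strptime(start_time, "%Y-%m-%d %H:%M:%S")
| eval duration_readable = tostring(duration, "duration")
```

strptime() returns epoch seconds, so the subtraction yields a number; any strftime()/tostring() formatting for display belongs after the arithmetic, not before.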
Hi. I have to upgrade a Splunk environment from Splunk 7.2.4.2 to 9.1. I don't have the option to migrate to a new cluster, and the Upgrade Readiness App is not available for our current version. I know I need to go from 7 to 8 and then to 9, in the order Cluster Master / Indexer Peers / Search Peers / Deployer / Deployment Server / UFs and HFs. Can anyone offer any input on what may catch me out in the process? Thanks