All Posts

Hi, I am deploying Splunk Enterprise and will eventually be forwarding Check Point Firewall logs using Check Point's Log Exporter. Check Point provides the option to select "Syslog" or "Splunk" as the log format (there are some other formats as well); I will choose "Splunk". I need to know how to configure Splunk Enterprise to receive encrypted traffic from Check Point if I use TLS at the Check Point to send encrypted traffic to Splunk. Can someone enlighten me on this please?   Thanks!
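For what it's worth, here is a minimal sketch of what the receiving side can look like in Splunk Enterprise. This is only an illustration — the port, index, sourcetype, certificate path, and password are placeholders you would replace with your own values, and the PEM is assumed to contain the server certificate, private key, and CA chain:

```ini
# inputs.conf on the Splunk instance receiving the Check Point feed
# (port, index, sourcetype, and cert paths are placeholders)
[tcp-ssl://6514]
index = checkpoint
sourcetype = cp_log

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/splunk-server.pem
sslPassword = <certificate password>
requireClientCert = false
```

On the Check Point side you would then point Log Exporter at this host and port with TLS enabled, using a certificate trusted by the Splunk server (or mutual TLS with requireClientCert = true if you want client authentication).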
My effective daily volume was 3072 MB, from three licenses of 1024 MB each. Since these were about to expire, we added another three 1024 MB licenses. But when the original three licenses expired, the effective daily volume dropped from 6144 MB to 1024 MB instead of 3072 MB. Does anyone know why it is not counting the three remaining licenses correctly? Even after removing the three expired ones, it is still limited to 1024 MB, and restarting Splunk gives the same result.
Hi everyone, I'm trying to personalize the "Configuration" tab of my app generated by Add-on Builder. By default, when we add an account, we enter the Account Name / Username / Password. First, I would simply like to change the labels for Username and Password, replacing them with Client ID and Client Secret (and secondly, add a Tenant ID field). I achieved this by editing the file $SPLUNK_HOME/etc/apps/<my_app>/appserver/static/js/build/globalConfig.json and then incrementing the version number in the app properties (as shown in this post: https://community.splunk.com/t5/Getting-Data-In/Splunk-Add-on-Builder-Global-Account-settings/m-p/570565). However, when I make new modifications elsewhere in my app, the globalConfig.json file is reset to its default values. Do you know how to make these changes persist? Splunk version: 9.2.1. Add-on Builder version: 4.3.0. Thanks
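For context, the kind of edit being described looks roughly like this excerpt of a globalConfig.json account tab. This is an illustrative shape only — the exact structure and field names in a file generated by Add-on Builder may differ:

```json
{
  "pages": {
    "configuration": {
      "tabs": [
        {
          "name": "account",
          "title": "Accounts",
          "entity": [
            { "field": "name", "label": "Account Name", "type": "text" },
            { "field": "username", "label": "Client ID", "type": "text" },
            { "field": "password", "label": "Client Secret", "type": "text", "encrypted": true }
          ]
        }
      ]
    }
  }
}
```

The "label" values are what show in the UI, while the "field" names stay tied to the underlying storage.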
The reason why it's naming the series 2023 is that the current month is now November 2024, so it's wrapping by 12 months: the first series is Dec 2023 -> Nov 2024. Even though you are only searching for data in the current year, the timewrap command works out the series name based on your timewrap span of 1y. If you made the search with earliest=@y latest=+y@y, which searches from 2024-01-01 to 2024-12-31, it would label the series correctly as 2024. So, it's just a function of timewrap. You can see this more clearly if you set your time_format to include the month, i.e. time_format=%Y-%m - then the month will show up in the series names. And if you change series=exact to series=relative, you will see the series is labelled 'latest_year', which means a 12-month period. Hope this helps.
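Putting the pieces above together, a search along these lines (index and span are placeholders for your own) demonstrates the behaviour:

```
index=mydata earliest=@y latest=+y@y
| timechart span=1mon count
| timewrap 1y series=exact time_format=%Y-%m
```

With the year pinned to calendar boundaries, each wrapped series starts in January, so the series names line up with the years you expect.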
Any subsearch will have a limit. There is a way to combine two datasets in lookups without using append, e.g. | inputlookup file1.csv | inputlookup append=t file2.csv - using append=t on the second inputlookup does NOT have a subsearch limitation. Without knowing what you are doing in more detail, it's impossible to suggest a solution; however, even though you are using commands such as mvexpand, it is generally possible to do a single search with the (index=A OR index=B) method and then manipulate the result set to get what you want.
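As a rough sketch of that single-search pattern (indexes and field names here are made up), you tag each event with where it came from and then combine in stats:

```
(index=A OR index=B)
| eval src=if(index=="A", "first_set", "second_set")
| stats count values(some_field) as some_field by id src
```

Because there is no subsearch, none of the subsearch row limits apply; the two datasets are read in one pass.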
Yes, if you are using SCP then ACS is your tool for doing this. There is also a Terraform provider for this kind of task, if that tool is familiar to you. And if you are a partner, there is a presentation from a couple of years ago at GPS which gives you an excellent framework for managing clients' SCP environments.
I had the same issue today on a fresh Splunk installation. I solved it by doing the following:

1. Install Java 11 on my server.

2. Configure the app.conf file under the local folder of the DB Connect application (\etc\apps\splunk_app_db_connect\local):

[install]
is_configured = 1

3. Under the same folder, create the dbx_settings.conf file with the following:

[java]
javaHome = C:\Program Files\Java\jdk-11

I'm fairly sure all I actually needed was is_configured set to 1. Please try and validate this.
That shouldn't matter, as _time doesn't get its value from c_time or Time. Basically those lines are not needed - unless there is some weird alias in props.conf or something that puts e.g. Time into the _time field? You should try to find where in this dashboard something is manipulating _time based on the c_time or Time field.
Hi, here is an old post which contains a link to a Python script for doing this: https://community.splunk.com/t5/Dashboards-Visualizations/Can-we-move-the-saved-searches-or-knowledge-objects-created/m-p/672741/highlight/true#M55102 As it says there, you can select all those objects under Reassign Knowledge Objects, which then gives you the possibility to reassign them in bulk. Usually this works as expected, but from time to time you cannot change all those KOs with the GUI. In that case, just use the previously mentioned Python script and it will do the rest. r. Ismo
Hi. First: as you are using UDP as the transport protocol, you will definitely lose events. You cannot do anything about that, as it is inherent to the protocol. You should build a separate syslog cluster with a VIP address and then send the syslog events from those backends to Splunk. Both rsyslog and syslog-ng are suitable for that. If you don't have enough experience with syslog servers, probably the easiest way to achieve this is to use Splunk's SC4S. You can find it here: https://splunk.github.io/splunk-connect-for-syslog/main/ https://splunkbase.splunk.com/app/4740 There is also a .conf presentation about it, probably from 2020 (or 2019). And never use an HF or indexer to terminate a TCP/UDP syslog feed into Splunk; always use a separate syslog server. r. Ismo
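To illustrate the backend side of that pattern (port, paths, and file layout are placeholders, not a recommendation for your exact environment), a syslog backend behind the VIP might receive events and write them per-host for a forwarder to pick up:

```
# /etc/rsyslog.d/10-remote.conf on one syslog backend (illustrative only)
module(load="imudp")
input(type="imudp" port="514")

# one file per sending host, which a universal forwarder can monitor
template(name="perHost" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="perHost")
```

SC4S packages this kind of setup (plus source classification and HEC output to Splunk) so you don't have to maintain the syslog configuration yourself.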
I'm sorry, I think I put it in the wrong place. We're using Splunk Cloud, so this solution (ACS) will probably work. I'll update once I've worked on it, to confirm it works for my needs.
Based on the group where you posted this question, you are doing this on Splunk Enterprise, not Splunk Cloud? ACS works only with Cloud, not with Enterprise. In Enterprise you need CLI access to the node, and then you can script it. E.g. Ansible is a good tool for managing installations. You could have a control node where you fetch packages/apps from git and then install them with ansible-playbook.
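As a rough sketch of that ansible-playbook approach (the host group, paths, and app name here are all made up for illustration):

```yaml
# deploy_app.yml - push a Splunk app from the control node, then restart
- hosts: splunk_servers
  become: true
  tasks:
    - name: Copy app into the Splunk apps directory
      ansible.builtin.copy:
        src: files/my_app/
        dest: /opt/splunk/etc/apps/my_app/
        owner: splunk
        group: splunk

    - name: Restart Splunk to pick up the app
      ansible.builtin.command: /opt/splunk/bin/splunk restart
```

Run with ansible-playbook deploy_app.yml; in practice you would pull files/my_app from git on the control node first.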
@gcusello have you tried adding a _meta entry in your HF/UF's inputs.conf and putting that information there? I think that could solve your needs.
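For example (stanza path and field values here are just illustrative), in inputs.conf on the forwarder:

```ini
[monitor:///var/log/myapp]
index = myindex
_meta = environment::production site::helsinki
```

Each field::value pair in _meta is attached to every event from that input as indexed metadata, searchable at index time.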
Those messages are quite normal and don't describe what issue you have. Have you tried e.g. nc or curl to check whether the master is listening for peers and responds with anything? Is pass4SymmKey working, or are there any messages about it in _internal? Btw, when you post logs, please use the code block element </> and paste the lines there. It's much easier to read, and we can be sure the text is exactly what you pasted. If the connection between master and peer is working, there will be lots of messages in _internal.
Hi. Based on these conf files, it seems to do the following:

- Take the timestamp from the beginning of the event and put it into _time
- Ensure that lines are not longer than 10000 characters
- The syslog-host transform is missing, so I cannot tell what it does
- Extract the hostname from the event and save it into metadata for use in the next step
- Define the index based on the hostname (FQDN) in the event; the FQDN-to-index mapping is defined in that CSV lookup file
- Change \r\n newlines to just \n
- Don't generate punctuation for the event

More detailed information is in the links @PaulPanther added in his post. r. Ismo
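As an illustration only - the stanza name and settings below are a guessed shape, not the asker's actual config - a props.conf doing most of the steps above would look something like:

```ini
# props.conf - illustrative shape, not the actual file under discussion
[example_sourcetype]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
SEDCMD-strip_cr = s/\r\n/\n/g
ANNOTATE_PUNCT = false
TRANSFORMS-host_and_index = extract_hostname, route_index_by_fqdn
```

The two TRANSFORMS entries would then be defined in transforms.conf, with the index routing writing to the _MetaData:Index key.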
Sorry for the late reply. As I've changed my mail over the years, I don't receive email notifications from replies. Here's the app: https://github.com/skalliger/encryption_and_vulnerability_check  
I am confused about why I only get IDs from my Salesforce ingest. For example, I am not getting Username, Profile Name, Dashboard Name, Report Name, etc.; I am getting the User ID, Profile ID, Dashboard ID, and so forth, which makes searches really difficult. How am I to correlate an ID with readable, relevant information, e.g. so that a User_ID equates to a Username (Davey Jones)? Help please.
Hi @Alex.Nyago, Thanks for asking your question on the Community. Did you happen to find a solution or any new information you can share here? If you still need help, please contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM) 
Hi @Hector.Arredondo, I reached out to your AppDynamics CSM. They should be in touch with you to talk more about this. 
Then you'll want to look at the schedule setting, which defaults to running the script at startup.