All Posts

This may also be useful: you can let the already-logged files remain and just refine inputs.conf to exclude the logs you don't want monitored and ingested. https://community.splunk.com/t5/Getting-Data-In/How-to-blacklist-inputs-conf/td-p/598999 This is in addition to PickleRick's documentation suggestion: https://docs.splunk.com/Documentation/Splunk/latest/Admin/InputsConf#MONITOR
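For instance, a minimal sketch of that inputs.conf refinement (the monitor path, filename pattern, and index here are hypothetical; adjust to your environment):

```
# inputs.conf - example only; path and pattern are made up
[monitor:///var/log/myapp]
# skip files we don't want monitored and ingested
blacklist = \.(debug|trace)\.log$
index = main
```

Files already indexed stay in Splunk; the blacklist only stops new ingestion of matching files.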
Solution/workaround was to comment out the user_account_control_property lines in the default transforms/props files. 
Replying to an ancient thread. Has anyone had success getting this plugin to talk to a regular LDAP server like FreeIPA? It seems like feeding it a distinguishedName attribute might work.
Hello @., I asked around and this is what I heard back: if the controller is on 24.6+ and you disable the agent, it should disappear within a day or two. You have to disable (or uninstall) the agent from the host(s). Please let me know if this helps.
I have the same issue. We use Splunk Cloud, and the permissions are fine. I did not uninstall and reinstall, as I'm not sure of the full ramifications of that. I don't know if it's related or not, but I noticed it after I installed the latest version from Splunkbase. 
So, in my organization I have created many dashboards, but I want to know if people actually view them, how often, and which roles are using/viewing them. Is it possible to get these stats for dashboards? This will be very helpful for me and my team in the future. Thank you.
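One common way to approach this (a sketch only, assuming the _internal index and the splunk_web_access sourcetype are searchable in your environment; the dashboard name is a placeholder):

```
index=_internal sourcetype=splunk_web_access uri_path="*/app/*/my_dashboard*"
| stats count AS views BY user
```

Mapping users to roles would take a further step (for example a lookup built from user/role information), which depends on how your environment is set up.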
I am new to Splunk and have been tasked with setting up management and data traffic to use SSL certificates. A colleague installed Splunk 9.2.1 on a Windows 2022 server on a separate application drive. I found a document on the Splunk documentation site, "How to obtain certificates from a third-party for inter-Splunk communication", but the commands use environment variables that are not set up on my server. Questions: 1. Were these variables supposed to be added during the install? 2. If not, which variables do I need to add, and where do I add them (user or system variables)? 3. Is there a major difference in configuration if Splunk is installed to an application drive rather than the O/S drive? 4. When generating the privatekey.key file, is it supposed to be saved in the same folder as the servercertificate.csr?
Usually it’s best to create a new question instead of adding a question to an old, solved one! Here is a .conf presentation about TLS: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf
From the post I'm assuming Splunk is installed on the default C: drive? What in this process needs to change if Splunk was installed on a different drive? Several commands use environment variables, and I don't see any on the server where a colleague previously installed Splunk 9.0.x. Do I need to manually add system variables, or should the installer have done that during the install? If I need to add them, what are they, where do they get added (user or system), and where do they point to?
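For illustration only (not a claim about what the Windows installer does or doesn't set): if you end up defining SPLUNK_HOME yourself, a machine-level variable can be set from an elevated prompt, with a hypothetical drive and path:

```
rem Hypothetical install path - adjust to your application drive
setx /M SPLUNK_HOME "D:\Splunk"
```

Open a new command prompt afterwards so the variable is visible; commands in the docs that reference %SPLUNK_HOME% should then resolve regardless of which drive Splunk lives on.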
Adding intermediate forwarders introduces complexity and improves neither integrity nor latency. Loss of one of the HFs means half of the UFs are offline. The HFs need time to process events, so that adds latency.
Inputs settings can determine which files to monitor, but cannot filter events out of monitored files.  To do that, you need to use props and (optionally) transforms on an indexer or heavy forwarder.
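A minimal sketch of that props/transforms approach (the sourcetype name and regex are hypothetical; this goes on the indexer or heavy forwarder):

```
# props.conf - example only
[my_sourcetype]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf - route matching events to the null queue
[drop_noise]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are discarded before indexing; everything else in the monitored file is ingested normally.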
Hi. Splunk best practice is to use UFs and send logs directly to the indexers. Of course there are some cases where it's best to also use HFs between the UFs and indexers, but not in the normal case. When you add an HF between a UF and the indexers, you always add complexity and latency to your installation. In most cases you also reduce event distribution on the indexer side, which decreases your search performance. Using HFs instead of UFs will also generate more traffic between sites, as HFs add some metadata to all events. Based on what you have told us, I don't see that this separation will lead to your objectives; instead it does just the opposite. But if you still want to do it, then you should change at least the following: connect those HFs directly to your main Splunk, not West -> East -> Indexers; add more HFs on both sites to get redundancy and better event distribution and performance; add more pipelines in every HF to get better performance and event distribution on the indexers. r. Ismo
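On that last point, a sketch of how extra pipelines are configured on a forwarder (the value 2 is just an example; sizing depends on your hardware):

```
# server.conf on the HF - example only
[general]
parallelIngestionPipelines = 2
```

Each pipeline maintains its own output connections, so more pipelines spread events across more indexers at once, improving distribution.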
Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear I'll try to explain my problem in more detail. I'm using Splunk 9.2.1, and I'm trying to generate a PDF from one of my dashboards using a Splunk API call. From what I saw online, I should use: GET /services/pdfgen/render?input-dashboard=<DashboardName>&namespace=<AppName>&paper-size=a4-landscape with user = MyUser; app = MyApp; dashboard = Security_events_dashboard (I'm using a module that calls the API for me; all I do is specify the endpoint and the parameters, and it gives me the response as a string). The problem is that I get this error: Unable to render PDF. Bailing out of Integrated PDF Generation. Exception raised while preparing to render "Untitled" to PDF. [HTTP 404] https://localhost:8089/servicesNS/MyUser/MyApp/data/ui/views/Security_events_dashboard; [{'type': 'ERROR', 'code': None, 'text': 'Could not find object id=Security_events_dashboard'}] In the GUI, signed in as MyUser, I can see the dashboard under MyApp, and the permission is set to read for MyUser, Owner = nobody, Sharing = App. To confirm this, on my Search Head VM I can see the dashboard at $SPLUNK_HOME/etc/apps/MyApp/default/data/ui/views/security_events_dashboard.xml. Plus, in $SPLUNK_HOME/etc/apps/MyApp/metadata/default.meta: [views/security_events_dashboard.xml] access = read : [MyUser], write : [admin], owner = nobody, version = 9.1.0.1, export = system. I've tried using the dashboard name as security_events_dashboard (instead of Security_events_dashboard), but I get the same error. I don't see what I'm missing here, so if anyone could give me a hint or two, please, thank you.
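One way to narrow this down (a sketch only; host, credentials, and names are the placeholders from the post): check whether the REST layer can see the view at all, independent of PDF generation, using the app/owner namespace the object actually lives in:

```
# Sketch - list the view via REST; a 404 here means a namespace/name
# problem, not a PDF-rendering problem
curl -k -u MyUser:password \
  "https://localhost:8089/servicesNS/nobody/MyApp/data/ui/views/security_events_dashboard"
```

If the object only resolves under owner "nobody" and not under "MyUser", that would point at the namespace in the failing URL rather than at permissions.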
Hi. We have been unable to use the PostgreSQL connector since updating to SOAR version 6, either with the latest connector version or with previous ones. This issue happens both in cloud environments and in on-prem environments (which were connecting fine to PostgreSQL on Phantom 5.x versions). This is the error we get on-prem (the very same happens in cloud environments with the automation broker): Testing Connectivity / App 'PostgreSQL' started successfully (id: 1723042384532) on asset: 'pgdb' (id: 433) / Loaded action execution configuration / db login error SCRAM authentication requires libpq version 10 or above / Test Connectivity Failed. PostgresqlConnector::initialize() returned error. I already opened a support ticket weeks ago, but maybe some of you were able to solve it on your own. Any ideas about the root cause and possible solutions? Regards
Hi, we have installed AppDynamics and are using the Oracle JVM. We added tools.jar under the Tomcat lib directory and under the Java JRE lib directory as well, but when we try to check object tracking we are still seeing "tools.jar is not in the JVM classpath". Any help resolving this is appreciated.
Trying to update props/transforms.conf so that I can create fields for the items listed on the left side of the image below. FIELD_DELIMITER=: FIELD_NAMES=myfield1,myfield2,myfield3,myfield4 is what I am working with, and I have not had success.
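For comparison, a hedged sketch of delimiter-based search-time extraction (the stanza and sourcetype names are hypothetical). Note that FIELD_DELIMITER/FIELD_NAMES belong to indexed extractions in props.conf, whereas transforms.conf uses DELIMS/FIELDS, and props.conf needs a REPORT line tying the two files together:

```
# transforms.conf - example only
[my_colon_fields]
DELIMS = ":"
FIELDS = myfield1,myfield2,myfield3,myfield4

# props.conf - wire the transform to the sourcetype
[my_sourcetype]
REPORT-my_colon_fields = my_colon_fields
```

A missing REPORT linkage is a common reason delimiter settings appear to do nothing.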
You have probably already read the previous doc? Here is another one: https://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutsearchtimeranges. To be sure that you have the correct time span in use, you should use UTC (Unix time) as @PickleRick already proposed. Remember to convert your local time to UTC before you run that query. But please remember that Splunk stores all events in UTC and displays them based on your current TZ definition (set in User Preferences in your Splunk GUI).
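For example, with the index name and epoch values as arbitrary illustrations, a search pinned to an exact UTC window looks like:

```
index=myindex earliest=1718000000 latest=1718003600
```

Because earliest/latest are given in epoch seconds, the window is unambiguous regardless of the time-zone preference of whoever runs the search.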
I am also here from the future to say that this still works in 2024!
I propose the last option. But in the first phase it may be easier to find the differences without the --debug option; once you know what the differences are, rerun with --debug to see where those settings are defined.
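A sketch of that two-phase comparison with btool (run on each instance being compared; `inputs` is just an example configuration file name):

```
# Phase 1: effective settings only, easier to diff between instances
splunk btool inputs list > effective.txt

# Phase 2: same output annotated with the file each setting comes from
splunk btool inputs list --debug > effective_with_sources.txt
```

Diffing the phase-1 outputs shows *what* differs; the phase-2 output then shows *where* each differing setting is defined.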
Do you really have 3 different indexers, each containing its own indexes, e.g. the 1st with the card indexes, the 2nd with the bank indexes and the 3rd with the error indexes? Or do you have one indexer (or cluster) which contains all those separate indexes?