All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


So, in my organization I have created many dashboards, but I want to know whether people actually view them, how often, and which roles are using them. Is it possible to get these statistics for dashboards? This would be very helpful for me and my team in the future. Thank you.
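A hedged sketch of one common approach, assuming you can search the _internal index (the rex extraction is illustrative, not an official field):

index=_internal sourcetype=splunk_web_access user=*
| rex field=uri "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)"
| search dashboard=*
| stats count AS views latest(_time) AS last_viewed BY user app dashboard
| sort - views

Mapping users to roles would need an extra step, for example joining against | rest /services/authentication/users.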
I am new to Splunk and I have been tasked to set up management and data traffic to use SSL certificates. A colleague installed Splunk 9.2.1 on a Windows 2022 server, on a separate application drive. I found a document on the Splunk docs site, "How to obtain certificates from a third-party for inter-Splunk communication". The commands use environment variables that are not set up on my server. Questions:
1. Were these variables supposed to be added during the install?
2. If not, which variables do I need to add, and where do I add them (user or system variables)?
3. Is there a major difference in configuration if Splunk is installed on an application drive rather than the OS drive?
4. When generating the privatekey.key file, is it supposed to be saved in the same folder as servercertificate.csr?
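For reference, if SPLUNK_HOME is not defined on your server, setting it yourself is safe (a session variable is enough; a system variable also works), and Splunk ships its own OpenSSL you can use. A hedged cmd sketch, assuming an application-drive install path of E:\Splunk (path hypothetical; keeping the key next to the CSR is fine as long as you protect the key file):

set SPLUNK_HOME=E:\Splunk
rem generate a private key and a CSR using Splunk's bundled OpenSSL
"%SPLUNK_HOME%\bin\splunk.exe" cmd openssl genrsa -out privatekey.key 2048
"%SPLUNK_HOME%\bin\splunk.exe" cmd openssl req -new -key privatekey.key -out servercertificate.csr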
Usually it's best to create a new question instead of adding a question to an old solved one! Here is one .conf presentation about TLS: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf
From the post I'm assuming Splunk is installed on the default C: drive? What in this process needs to change if Splunk was installed on a different drive? Several commands use environment variables, and I don't see any on the server a colleague previously installed Splunk 9.0.x on. Do I need to manually add system variables, or should the installer have done that during the install? If I need to add them, what are they, where do they get set (user or system), and where do they point to?
Adding intermediate forwarders introduces complexity and improves neither integrity nor latency. Loss of one of the HFs means half of the UFs are offline. The HFs need time to process events, so that adds latency.
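For comparison, a minimal outputs.conf sketch for the UFs load-balancing directly across the indexers, the simpler topology described above (hostnames and ports are hypothetical):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx-east.example.com:9997, idx-west.example.com:9997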
Inputs settings can determine which files to monitor, but cannot filter events out of monitored files.  To do that, you need to use props and (optionally) transforms on an indexer or heavy forwarder.
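For example, a minimal sketch of dropping unwanted events at parse time on an indexer or heavy forwarder (stanza, sourcetype, and regex are hypothetical):

props.conf:
[my_sourcetype]
TRANSFORMS-drop_noise = drop_noise

transforms.conf:
[drop_noise]
# events whose raw text matches REGEX are routed to the nullQueue, i.e. discarded
REGEX = DEBUG|heartbeat
DEST_KEY = queue
FORMAT = nullQueue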
Hi
The Splunk best practice is to use UFs and send logs directly to the indexers. Of course there are some cases where it's best to also use HFs between the UFs and the indexers, but not in the normal case. When you add an HF between a UF and an indexer you always add complexity and latency to your installation. In most cases you also reduce event distribution on the indexer side, which decreases your search performance. Using HFs instead of UFs will also generate more traffic between sites, as HFs add some metadata to all events. Based on what you have told us, I don't see this separation leading to your objectives; instead it will do just the opposite. But if you still want to do it, then you should at least change the following:
- connect those HFs directly to your main Splunk instance, not West -> East -> Indexers
- add more HFs at both sites to get redundancy and better event distribution and performance
- add more pipelines in every HF to get better performance and event distribution on the indexers (see the sketch after this list)
r. Ismo
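As a minimal sketch of that last bullet, the pipeline count is set in server.conf on each HF (the value 2 is only an example; size it against the spare CPU cores on the host):

[general]
parallelIngestionPipelines = 2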
Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear I'll try to explain my problem in more detail. I'm using Splunk 9.2.1, and I'm trying to generate a PDF from one of my dashboards using a Splunk API call. From what I saw online, I should use:

GET /services/pdfgen/render?input-dashboard=<DashboardName>&namespace=<AppName>&paper-size=a4-landscape

with user = MyUser; app = MyApp; dashboard = Security_events_dashboard. (I'm using a module that calls the API for me; all I do is specify the endpoint and the parameters, and it gives me the response as a string.) The problem is that I get this error:

Unable to render PDF.<br/><ul><il>Bailing out of Integrated PDF Generation. Exception raised while preparing to render "Untitled" to PDF. [HTTP 404] https://localhost:8089/servicesNS/MyUser/MyApp/data/ui/views/Security_events_dashboard; [{'type': 'ERROR', 'code': None, 'text': 'Could not find object id=Security_events_dashboard'}]</li></ul>

In the GUI, signed in as MyUser, I can see the dashboard under MyApp, and the permission is set to read for MyUser, Owner = nobody, Sharing = App. To confirm this, on my Search Head VM I can see the dashboard under $SPLUNK_HOME/etc/apps/MyApp/default/data/ui/views/security_events_dashboard.xml. Plus, in $SPLUNK_HOME/etc/apps/MyApp/metadata/default.meta:

[views/security_events_dashboard.xml]
access = read : [MyUser], write : [admin]
owner = nobody
version = 9.1.0.1
export = system

I've tried using the dashboard name as security_events_dashboard (instead of Security_events_dashboard) but I get the same error. I don't see what I'm missing here, so if anyone could give me a hint or two please, thank you.
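For anyone reproducing this outside that module, a minimal curl sketch of the same call (credentials and output file name are hypothetical):

curl -k -u MyUser:password -o dashboard.pdf "https://localhost:8089/services/pdfgen/render?input-dashboard=security_events_dashboard&namespace=MyApp&paper-size=a4-landscape"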
Hi
We have been unable to use the PostgreSQL connector since updating to SOAR version 6, either with the latest connector version or with previous ones. This issue is happening both in cloud environments and in on-prem environments (which were connecting fine to PostgreSQL on Phantom 5.x versions). This is the error we are getting on-prem (the very same happens in cloud environments with the automation broker):

Testing Connectivity
App 'PostgreSQL' started successfully (id: 1723042384532) on asset: 'pgdb'(id: 433)
Loaded action execution configuration
db login error SCRAM authentication requires libpq version 10 or above
Test Connectivity Failed. PostgresqlConnector::initialize() returned error.

I already opened a support ticket weeks ago, but maybe some of you were able to solve it on your own. Any ideas about the root cause and possible solutions? Regards
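Not an official fix, but a quick diagnostic sketch: the "SCRAM authentication requires libpq version 10 or above" message comes from the psycopg2 layer, so checking which libpq the connector's Python environment was built against can confirm the mismatch. Run it inside the Python environment the app actually uses (the phenv wrapper is an assumption for on-prem; paths vary):

phenv python -c "import psycopg2; print(psycopg2.__libpq_version__)"
# e.g. 90624 means libpq 9.6.24, too old for SCRAM-SHA-256; 100000+ is required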
Hi,
We have installed AppDynamics and we are using the Oracle JVM. We added tools.jar under the Tomcat lib directory, and under the Java JRE lib as well, but when we try to enable object instance tracking we still see "tools.jar is not in the JVM classpath". Any help resolving this is appreciated.
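One hedged thing to check: Tomcat's standard startup scripts add $JAVA_HOME/lib/tools.jar to the classpath themselves, but only when JAVA_HOME points at a full JDK; pointing it at a JRE leaves tools.jar off no matter where you copy the jar. A minimal bin/setenv.sh sketch (path hypothetical; assumes Oracle JDK 8, since tools.jar was removed in JDK 9+):

# bin/setenv.sh - point Tomcat at the JDK, not a JRE
JAVA_HOME=/usr/java/jdk1.8.0_281
export JAVA_HOME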
Trying to update props/transforms.conf so that I can create fields for the items listed on the left side of the image below.

FIELD_DELIMITER=:
FIELD_NAMES=myfield1,myfield2,myfield3,myfield4

is what I am working with, and I have not had success.
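For what it's worth, FIELD_DELIMITER and FIELD_NAMES only apply in props.conf together with INDEXED_EXTRACTIONS; for an ordinary search-time delimiter extraction the transforms.conf settings are DELIMS and FIELDS. A minimal sketch, assuming colon-delimited events (stanza and sourcetype names hypothetical):

transforms.conf:
[colon_fields]
DELIMS = ":"
FIELDS = myfield1,myfield2,myfield3,myfield4

props.conf:
[your_sourcetype]
REPORT-colon_fields = colon_fields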
You have probably already read the previous doc? Here is another one: https://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutsearchtimeranges. To be sure that you have the correct time span in use, you should use UTC (Unix time) as @PickleRick already proposed. Remember to convert your local time to UTC before you run that query. But please remember that Splunk stores all events in UTC and displays them based on your current TZ definition (defined in User Preferences in your Splunk GUI).
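As a minimal sketch, an explicit UTC window pinned with epoch times (the index name and epoch values are hypothetical; 1717200000 is 2024-06-01 00:00:00 UTC):

index=main earliest=1717200000 latest=1717286400
| eval shown_in_your_tz=strftime(_time, "%Y-%m-%d %H:%M:%S %z")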
I am also here from the future to say that this still works in 2024!
I propose the last option. But in the first phase it may be easier to find the differences without the --debug option (--debug shows where each setting is defined, which adds noise to a diff). After you know which settings differ, look at where they are defined.
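A hedged sketch of that two-phase workflow from the CLI (the conf name "server" and the grep pattern are just examples; run the first command on each server being compared):

$SPLUNK_HOME/bin/splunk btool server list > /tmp/btool_hostA.out
$SPLUNK_HOME/bin/splunk btool server list > /tmp/btool_hostB.out   # on the other host
diff /tmp/btool_hostA.out /tmp/btool_hostB.out
# phase 2: once you know which setting differs, find where it is defined
$SPLUNK_HOME/bin/splunk btool server list --debug | grep sslConfig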
Do you really have 3 different indexers, each containing its own indexes, i.e. the 1st the card indexes, the 2nd the bank indexes and the 3rd the error indexes? Or do you have one indexer (or cluster) which contains all those separate indexes?
Hi
As others have already said, you can do it. BUT first you should consider whether this is really needed, or whether there is a better way to do it. How much data do you have on this old server? It may be better to migrate that data to a new indexer cluster node and then migrate that node into the indexer cluster. Also, do you really need an SHC when you have only a two-node indexer cluster? Usually an SHC creates more complexity than just using one bigger SH. Of course your use case could require it, but then it also needs a bigger indexer cluster, and that should probably be a multisite cluster instead of a standard cluster. I propose that you contact a local Splunk Partner or PS service and plan your environment based on your real use case and needs.
r. Ismo
Will the Searchbase App (https://splunkbase.splunk.com/app/7188) ever be made available for general download?
Here is Splunk's own Search Tutorial, https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial, which contains some example data sets. It's easiest to use those. Of course you could use your own data sets, but then you must understand how to get data into Splunk: https://docs.splunk.com/Documentation/Splunk/latest/Data/WhatSplunkcanmonitor.
r. Ismo
Hello everyone, looking for a little guidance on our Splunk deployment for a system. Currently, we have a few different sites spanning the US, with Universal Forwarders deployed to all of the systems, each reporting back to one main Splunk instance individually. I'd like to look at splitting the Splunk system into two separate parts to improve integrity and reduce latency, but I have never dealt with deploying a heavy forwarder in a situation like this. My thought is to have all of the western Universal Forwarders send their events to a dedicated western Heavy Forwarder, have all of the eastern Universal Forwarders send their data to an eastern Heavy Forwarder, and have both Heavy Forwarders send their data to our main Splunk instance. (crude Visio below) Any guidance is greatly appreciated!
You should create a support/bug report with Splunk Support. Maybe they will see it the same way and add it to the fix list?