All Posts


Usually it's best to create a new question instead of adding a question to an old solved one! Here is one .conf presentation about TLS stuff: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf
From the post I'm assuming Splunk is installed on the default C: drive? What in this process needs to change if Splunk was installed on a different drive? Several commands use environment variables, and I don't see any on the server a colleague previously installed Splunk 9.0.x on. Do I need to manually add system variables, or should the installer have done that during the install? If I do need to add them, what are they, where do they get defined (user or system), and where do they point to?
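For illustration only, a minimal sketch of checking and setting SPLUNK_HOME system-wide on Windows, assuming a hypothetical D:\Splunk install path (verify what the installer actually created before changing anything):

  :: Check whether SPLUNK_HOME is already defined for this session
  echo %SPLUNK_HOME%
  :: Define it as a machine-level (system) variable; the path is hypothetical, adjust to your drive
  setx SPLUNK_HOME "D:\Splunk" /M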
Adding intermediate forwarders introduces complexity and improves neither integrity nor latency. Loss of one of the HFs means half of the UFs are offline. The HFs need time to process events, so that adds latency.
Inputs settings can determine which files to monitor, but cannot filter events out of monitored files.  To do that, you need to use props and (optionally) transforms on an indexer or heavy forwarder.
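As a minimal sketch of that props/transforms approach, assuming a sourcetype named my_sourcetype and that you want to drop events containing DEBUG (both names are placeholders):

  # props.conf on the indexer or heavy forwarder
  [my_sourcetype]
  TRANSFORMS-drop_noise = drop_debug_events

  # transforms.conf
  [drop_debug_events]
  REGEX = \bDEBUG\b
  DEST_KEY = queue
  FORMAT = nullQueue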
Hi. The Splunk best practice is to use UFs and send logs directly to the indexers. Of course there are some cases where it's best to also use HFs between the UFs and indexers, but not in the normal case. When you add an HF between a UF and an indexer, you always add complexity and latency to your installation. In most cases you also reduce event distribution on the indexer side, which decreases your search performance. Using HFs instead of UFs will also generate more traffic between sites, as HFs add some metadata to all events. Based on what you have told us, I don't see that this separation will lead to your objectives; instead it will do just the opposite. But if you still want to do it, then you should change at least the following:
- connect those HFs directly to your main Splunk, not West -> East -> Indexers
- add more HFs on both sites to get redundancy and better event distribution and performance
- add more pipelines in every HF to get better performance and event distribution on the indexers (see the sketch below)
r. Ismo
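A minimal sketch of the pipeline setting mentioned above, assuming the HF has spare CPU to support it (the value is only an example):

  # server.conf on the heavy forwarder
  [general]
  parallelIngestionPipelines = 2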
Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear I'll try to explain my problem in more detail. I'm using Splunk 9.2.1, and I'm trying to generate a PDF from one of my dashboards using a Splunk API call. From what I saw online, I should use:

  GET /services/pdfgen/render?input-dashboard=<DashboardName>&namespace=<AppName>&paper-size=a4-landscape

with user = MyUser, app = MyApp, dashboard = Security_events_dashboard (I'm using a module that calls the API for me; all I do is specify the endpoint and the parameters, and it gives me the response as a string). The problem is that I get this error:

  Unable to render PDF. Bailing out of Integrated PDF Generation. Exception raised while preparing to render "Untitled" to PDF. [HTTP 404] https://localhost:8089/servicesNS/MyUser/MyApp/data/ui/views/Security_events_dashboard; [{'type': 'ERROR', 'code': None, 'text': 'Could not find object id=Security_events_dashboard'}]

In the GUI, signed in as MyUser, I can see the dashboard under MyApp, and the permission is set to read for MyUser, Owner = nobody, Sharing = App. To confirm this, on my Search Head VM I can see the dashboard under $SPLUNK_HOME/etc/apps/MyApp/default/data/ui/views/security_events_dashboard.xml. Plus, in $SPLUNK_HOME/etc/apps/MyApp/metadata/default.meta:

  [views/security_events_dashboard.xml]
  access = read : [MyUser], write : [admin]
  owner = nobody
  version = 9.1.0.1
  export = system

I've tried using the dashboard name as security_events_dashboard (instead of Security_events_dashboard) but I get the same error. I don't see what I'm missing here, so if anyone could give me a hint or two, please, thank you.
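For reference, a hedged example of calling that endpoint directly with curl, using the names from the post (host, credentials and output file are placeholders):

  curl -k -u MyUser:password -o dashboard.pdf "https://localhost:8089/services/pdfgen/render?input-dashboard=security_events_dashboard&namespace=MyApp&paper-size=a4-landscape"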
Hi. We have been unable to use the PostgreSQL connector since updating to SOAR version 6, either with the latest connector version or with previous ones. This issue is happening both in cloud environments and in on-prem environments (which were connecting OK to PostgreSQL while on Phantom 5.x versions). This is the error we are getting on-prem (the very same happens in cloud environments with the automation broker):

  Testing Connectivity
  App 'PostgreSQL' started successfully (id: 1723042384532) on asset: 'pgdb' (id: 433)
  Loaded action execution configuration
  db login error
  SCRAM authentication requires libpq version 10 or above
  Test Connectivity Failed. PostgresqlConnector::initialize() returned error.

I already opened a support ticket weeks ago, but maybe some of you were able to solve it on your own. Any ideas about the root cause and possible solutions? Regards
Hi, we have installed AppDynamics and we are using the Oracle JVM. We added tools.jar under the Tomcat lib directory, and under the Java JRE lib as well, but when we try to check on object tracking we still see "tools.jar is not in the JVM classpath". Any help to resolve this is appreciated.
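One hedged sketch that sometimes helps in this situation is putting tools.jar on Tomcat's classpath explicitly in setenv.sh, assuming JAVA_HOME points to a full JDK (the path and file name are assumptions, not a confirmed fix for the AppDynamics warning):

  # $CATALINA_BASE/bin/setenv.sh
  # Append the JDK's tools.jar to the classpath Tomcat starts with
  export CLASSPATH="$CLASSPATH:$JAVA_HOME/lib/tools.jar"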
Trying to update props/transforms.conf so that I can create fields for the items listed on the left side of the image below.

  FIELD_DELIMITER = :
  FIELD_NAMES = myfield1,myfield2,myfield3,myfield4

is what I am working with, and I have not had success.
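A minimal sketch of one common way to do a delimiter-based extraction at search time, assuming a sourcetype named my_sourcetype (FIELD_DELIMITER and FIELD_NAMES are props.conf settings used with INDEXED_EXTRACTIONS for structured data; in transforms.conf the delimiter settings are DELIMS and FIELDS):

  # transforms.conf
  [colon_delimited_fields]
  DELIMS = ":"
  FIELDS = myfield1,myfield2,myfield3,myfield4

  # props.conf
  [my_sourcetype]
  REPORT-colon_fields = colon_delimited_fields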
You have probably already read the previous doc? Here is another one: https://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutsearchtimeranges. To be sure that you have the correct time span in use, you should use UTC (Unix time) as @PickleRick already proposed. Remember to convert your local time to UTC before you run that query. But please remember that Splunk stores all events in UTC and shows them based on your current TZ definition (defined in User Preferences in the Splunk GUI).
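A small sketch of searching with epoch (UTC) boundaries, with a hypothetical index name and illustrative timestamps:

  index=main earliest=1720000000 latest=1720086400
  | stats count by host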
I am also here from the future to say that this still works in 2024!
I propose the last option. But in the first phase it could be easier to find the differences without the --debug option (--debug shows where each setting is defined). Once you know what the differences are, rerun with --debug to see where those settings are defined.
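For example (the conf file name is just an illustration):

  $SPLUNK_HOME/bin/splunk btool inputs list
  $SPLUNK_HOME/bin/splunk btool inputs list --debug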
Do you really have 3 different indexers, each containing its own indexes (e.g. the first has the card index, the second bank, and the third error)? Or do you have one indexer (or cluster) which contains all of those separate indexes?
Hi. As others have already said, you can do it. BUT first you should think about whether this is really needed, or whether there is a better way to do it. How much data do you have on this old server? Would it be better to migrate that data to a new indexer cluster node and then migrate that node into the indexer cluster? Also, do you really need a SHC when you have only a two-node indexer cluster? Usually a SHC creates more complexity than just using one bigger SH. Of course your use case could need it, but then it also needs a bigger indexer cluster, and probably that should be a multisite cluster instead of a standard cluster. I propose that you contact a local Splunk Partner or PS service and plan your environment based on your real use case and needs. r. Ismo
Will the Searchbase App (https://splunkbase.splunk.com/app/7188) ever be made available for general download?
Here is Splunk's own Search Tutorial, https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial, which contains some example data sets; it's easiest to use those. Of course you could use your own datasets, but then you must understand how to get data into Splunk: https://docs.splunk.com/Documentation/Splunk/latest/Data/WhatSplunkcanmonitor. r. Ismo
Hello everyone, looking for a little guidance on our Splunk deployment for a system. Currently, we have a few different sites that span across the US, with universal forwarders deployed to all of the systems and reporting back to one main Splunk instance individually. I'd like to see about splitting the Splunk system up into two separate parts to improve integrity and reduce latency, but I have never dealt with deploying a heavy forwarder in a situation like this. My thought is to have all of the western universal forwarders send their events to a dedicated western heavy forwarder, have all of the eastern universal forwarders send their data to the eastern heavy forwarder, and have both of the heavy forwarders send their data to our main Splunk instance. (Crude Visio below.) Any guidance is greatly appreciated!
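For what it's worth, a minimal sketch of what the UF side of that design would look like, with hypothetical hostnames and the default forwarding port (other replies in this thread note why forwarding the UFs directly to the indexers is usually preferred):

  # outputs.conf on a western universal forwarder
  [tcpout]
  defaultGroup = west_hf

  [tcpout:west_hf]
  server = west-hf01.example.com:9997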
You should create a support/bug report with Splunk Support. Maybe they see it the same way and add it to the fix list?
Just some quick points:
- Fields are specific to the index they are a part of; they may not exist across indexes (though this is less problematic if your data is properly normalized).
- Have you done the queries to interrogate the data as it exists in your environment, and identified which indexes and fields you want and need (cf. PickleRick's comment about use case)? See the sketch after this post.

Some other, hopefully relevant, thoughts, which from my experience are sometimes useful in providing food for thought and context, especially for greener admins like me. To add to the other relevant responses: more generally, when working with Splunk and trying to craft queries about data in your environment (and this does depend on whether you are a user or an admin, which may itself mean different things depending on your organization), it matters how you are configuring ingestion (including, where relevant, the related architecture, such as whether there is a syslog server or a TA needed), setting up the indexes, configuring what counts as a source (a hint that there is a ton of customizability in Splunk), and setting a schema for hostnames, either auto-extracted during ingestion or otherwise configured in a .conf file or via a lookup.

Because there is a fairly large degree of customizability and arbitrariness in configurations (which may simply reflect your environment and its architecture, what your business wants and needs, etc.), in what is being ingested, in how it is labeled (are you specifying this, setting a schema, or just letting a TA or Splunk figure it out), and in whether there are standards for anything (internal to your organization, company policy, etc.), it can be hard to give specific advice unless you spell out all the particulars. In your case, getting a better sense of what is indexed and tagged, and which fields are available per index and per source (since fields exist inside an index rather than being standard, which is helped by following CIM normalization best practices), will help you enormously with tasks like this. The larger idea is to be kind to your future self and to others who have to interact with and administer your Splunk environment: follow best practices that make these tasks easier.

REFs:
https://docs.splunk.com/Documentation/CIM/5.3.2/User/UsetheCIMtonormalizedataatsearchtime
https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Getstartedwithgettingdatain
https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Data/Aboutindexedfieldextraction
https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-use-splunk-to-create-a-CMDB-like-table-of-asset-info/m-p/657338
https://splunkbase.splunk.com/
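Two sketch queries for the "interrogate the data" step above (the index name my_index is a placeholder): the first lists event counts per index and sourcetype, the second summarizes the fields present in one index.

  | tstats count where index=* by index, sourcetype

  index=my_index | fieldsummary | table field, count, distinct_count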
Have you checked this: https://docs.splunk.com/Documentation/Splunk/9.3.0/DashStudio/inputs ?