All Posts



Hi @jbv, the default Splunk Home Page has changed in recent versions, but you can still see the available data sources with a simple search like index=*. In addition, in the Search & Reporting App you have the Search history with the list of the last searches that you ran in your Splunk, and a new feature (called Table View) designed for people with little Splunk knowledge. Ciao. Giuseppe
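In case it helps, here is a minimal sketch of that kind of search (the use of tstats is an alternative I'm suggesting, not part of the original reply) that summarizes which indexes and sourcetypes currently hold data and when the last event arrived:

| tstats count latest(_time) as last_event where index=* by index, sourcetype
| convert ctime(last_event)
| sort - count

tstats is usually much faster than a raw index=* search because it only reads index-time metadata.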
Hi Basically you must be a paid customer for ES and named as a contact person for downloading the license, or, as @gcusello said, you must be a Splunk partner at associate (or higher) level and have fulfilled the NFR license requirements to get it. The last option could be to ask for it as a PoC from Splunk or a Splunk partner, but as it's quite complicated to set up correctly, it's better to ask the Splunk partner to do it for you. Also for show.splunk.com you need to be a partner to get access to it. r. Ismo
Adding to what @gcusello and @richgalloway already said, if it's a standard Splunk-supported app (I suppose by TA_Linux you mean the TA_nix but I can't be 100% sure), it will have its own docs page saying on which components it should/can be installed. If it's a third-party, independently written app it might have such a docs page as well. Generally speaking, Splunk apps contain settings which can be active on various components (either at search time or at index time), but if an app is properly written (and as far as I remember, there are checks which make sure that you can't upload a badly written app to Splunkbase; at least badly written in this context), you can typically deploy your app on all tiers and each tier will only "use" the part of the app which applies to that tier. So your app may contain:

1) Input/output definitions - in a Splunkbase-supplied app they will be set as disabled by default; you have to explicitly enable them, so if you just deploy an app with disabled inputs, they won't do anything anywhere. Of course, if you're deploying your own custom app with enabled inputs or outputs, they will try to do their job wherever they are deployed.

2) Index-time props/transforms settings - they will be active either on the initial forwarder (if applicable - like EVENT_BREAKER settings) or on the first "heavy" (based on a full Splunk Enterprise installation) component in the event's path (except ingest actions; they will be performed after the initial parsing as well, but that's a story for another day ;-)). Splunk will happily ignore them at search time.

3) Search-time props/transforms settings - they will be active only on search heads. You can safely deploy them to components active during the ingestion phase (HFs and indexers) and they will simply be ignored in the ingestion pipeline.
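To make the tiering concrete, here is a hypothetical sketch of how such settings typically look inside an app (the stanza names, sourcetype, and paths below are illustrative assumptions, not from the post above):

inputs.conf (only does something where it is enabled):
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:log
disabled = 1

props.conf:
[myapp:log]
# index-time settings - applied by the first full Splunk instance in the path
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
# event-breaking settings - applied on the universal forwarder
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
# search-time setting - only used by the search head
EXTRACT-level = level=(?<level>\w+)

The same app can sit on forwarders, indexers, and search heads; each tier simply ignores the settings that don't apply to it.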
Hi @AL3Z , are you saying that usually events with this EventCode are ingested, but sometimes you lose events? In this case, analyze whether there was some downtime of the Forwarder or of the connection, starting from the period where you're sure that you lost some events. If you're sure that there wasn't any downtime, open a case with Splunk Support. Ciao. Giuseppe
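As a rough illustration of that kind of check (the index, host pattern, and span are placeholder assumptions), you could look for gaps both in the ingested volume and in the forwarder connections recorded in Splunk's internal logs:

index=your_index host=your_dc* | timechart span=1h count

index=_internal source=*metrics.log* group=tcpin_connections hostname=your_dc*
| timechart span=1h sum(kb) as kb_received

A sustained drop to zero in either chart around the suspected period usually points at a forwarder or network outage rather than a filtering problem.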
Hi, We're currently deploying our internal Splunk instance and we're looking for a way to monitor the data sources that have logged data in our instance. I saw that previously there was a Data Summary button in the Search and Reporting app, but for some reason it's not showing up on our instance. Does anyone know if it got removed or moved somewhere else? Thank you
@gcusello , we're ingesting logs with this event code, but occasionally we're not receiving all the logs from the DCs into Splunk.
There are several possible causes for that, starting from wrong permissions on the source side (we don't even know if these are the only events that are not ingested or if you're ingesting any events from the Security journal at all), through input black/whitelisting, to active filtering on HFs/indexers. Don't get me wrong, but from this thread and other similar ones it looks as if your employer bought a Splunk license but didn't invest in either training for the staff or maintenance services from your friendly local Splunk partner. And you seem to need it.
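As a hedged first check for the "are any Security events arriving at all" question (the index name and source pattern are assumptions that depend on how the Windows TA is configured), something like this shows which event codes reach Splunk from each DC and when the last one arrived:

index=your_index source="*WinEventLog:Security*"
| stats count latest(_time) as last_seen by host, EventCode
| convert ctime(last_seen)

If EventCode 4743 never shows up for a given DC while other codes do, the problem is more likely auditing or filtering than connectivity.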
Hi @shashankk , don't use join because searches with join are very slow! Using my search you extract the common key that permits correlating the events containing the TestMQ and Priority fields, and the search displays the result as you like. Then you can also avoid displaying the key used for the correlation, having exactly the result you want:

| makeresults
| eval _raw="240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400509150632034-AERG00001A [Priority=Low,ScanPriority=0, Rule: Default Rule]."
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400540101635213-AERG00000A [Priority=Low,ScanPriority=0, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: <--- TRN: 0000002481540150632034-AERG00001A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]." ]
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: <--- TRN: 0000002400547150635213-AERG00000A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]. " ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540902427245-AERC000f8A [Priority=Medium,ScanPriority=2, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000001800540152427236-AERC000f7A [Priority=Medium,ScanPriority=2, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540109427216-AERC000f6A [Priority=High,ScanPriority=1, Rule: Default Rule]." ]
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| rex field=TestMQ "\w+\.\w+\.(?<key>\w+)"
| rex "TRN\@instance\.R(?<key>[^:]++):"
| rex "Priority\=(?<Priority>\w+)"
| stats values(TestMQ) AS TestMQ count(eval(Priority="Low")) as Low, count(eval(Priority="Medium")) as Medium, count(eval(Priority="High")) as High BY key
| fields - key
| fillnull value=0
| addtotals

Ciao. Giuseppe
Hi @AL3Z, you should run a search like the following:

index=your_index EventCode=4743

If you have no results, you have to perform two checks: at first, on the Splunk_TA_windows that you're using to ingest logs, to see if this EventCode is ingested or not; maybe there's a whitelist or a blacklist that filters this EventCode. If there isn't any filter, check on your Domain Controller whether this EventCode is logged by Windows: not all events are logged by default. About this I cannot help you: you need a Windows specialist. Ciao. Giuseppe
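For reference, a whitelist/blacklist filter in the Windows TA's inputs.conf typically looks something like this (a hypothetical sketch; the stanza and event codes below are illustrative, not taken from the poster's configuration):

[WinEventLog://Security]
disabled = 0
# event codes listed here are dropped before forwarding
blacklist = 4662,5156
# if a whitelist is present, only the listed event codes are collected
# whitelist = 4624,4625,4743

If 4743 appears in a blacklist, or a whitelist exists that doesn't include it, the event will never reach Splunk.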
Auditd is a separate mechanism from the "normal" logging used in Linux systems (formerly based on a syslog daemon, recently using journald, part of the systemd package). Auditd is a userspace component interacting with the kernel auditing subsystem, and that subsystem is meant for auditing. Normal syslog/journald logging is meant for "general logging", which might also include security-related events from various parts of the operating system. What is getting logged (and ingested by the UF's inputs) can vary depending on:

1) Settings of various system components - for example, sshd might or might not report unsuccessful connection attempts depending on the log verbosity settings in its config file.

2) How you are storing the events and ingesting them (if you don't forward your logs from journald to syslog but only ingest with the UF from /var/log/* files, you'll miss a lot of events that the system generates).
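As a rough illustration of the second point (the paths and sourcetypes are assumptions that depend on the distribution and on the add-on in use), a UF that only monitors classic log files would use stanzas like:

[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
disabled = 0

[monitor:///var/log/secure]
sourcetype = linux_secure
disabled = 0

Events that exist only in the journald binary journal, and are never forwarded to syslog files, would not be picked up by such monitor inputs.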
Hi @sanju2408de  Did you get any solution for this problem? I am also stuck in the same situation. Thanks
Hi, Splunk hasn't captured the 4743 events (computer account deletions) that occurred yesterday at 2 pm. Where should we investigate to determine the root cause? Thanks
Hi @gcusello @ITWhisperer  In this case I can see that the TransactionID is the common field between both the events (TestMQ and Priority), but I am unable to find how to use it in the query. Can you please help and suggest on it? Or can we do a JOIN based on the transaction IDs (for both the event types - TestMQ & Priority)?

| rex field=_raw "(?<TransactionID>\d+-\w+)"
Hi @iremdoesthings , I suppose that you already know Splunk and SPL; if not, let me know and I can hint at some free training to start with. Anyway, as your teacher and @PickleRick hinted, the Splunk Security Essentials App is a good starting point to find the searches for your use cases, but the real starting point is the data that you have available on your Indexer: which data do you have available? You could analyze your data with a simple search (index=* | stats count BY sourcetype) so you know which data sources you have available and can use. Also, in the Splunk Security Essentials App there's a very interesting feature that analyzes your data and tells you which searches you can implement on it; you can find it in the app at [Data > Data Inventory]. Ciao. Giuseppe
Hi @mohsplunking , as @richgalloway said, you should install the Add-On also on the HF because the parsing is done on it. The installation on the Indexer depends on your architecture: if you also have one or more Search Heads, you don't need to install the Add-On on the Indexers, but you must install it on the SHs. If instead your Indexer is a stand-alone server (in other words, it's both an Indexer and a Search Head), you have to install the Add-On on the Indexer. Ciao. Giuseppe
Hi @jaro , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi, based on the license rules you cannot use the same license on two different LMs! If you can't use only one LM, then you must ask Splunk support to split your license into two separate license files and install one on the first LM and the other on the second. r. Ismo
As long as you keep the app.conf and app name etc. the same, this should work. Of course, you must increase the version and build numbers.
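For illustration, the version and build typically live in app.conf stanzas like these (the values are placeholders, not from the original post):

[launcher]
version = 1.2.0

[install]
build = 3

[ui]
label = My App

Bumping the version (and build, if you use it) on each packaged release keeps upgrades and Splunkbase validation happy.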
Hi Team, I was looking to configure custom command execution, like getting the output of the ps -ef command or the MQ query count. Can someone please help on how to create monitoring for the same? The commands which I want to configure are normal Linux commands which are executed on the server using PuTTY, like "ps -ef | grep -i otel" and others.
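One common way to do this (a sketch under assumptions - the app name, script path, index, interval, and sourcetype below are placeholders, not an existing configuration) is a scripted input on the Universal Forwarder: put a small shell script in an app's bin directory and schedule it from inputs.conf.

$SPLUNK_HOME/etc/apps/my_monitoring/bin/otel_ps.sh:
#!/bin/sh
# print the otel processes; Splunk captures stdout as events
ps -ef | grep -i otel | grep -v grep

inputs.conf:
[script://./bin/otel_ps.sh]
interval = 300
sourcetype = ps:otel
index = os
disabled = 0

The script must be executable, and each run's output is indexed as one or more events that you can then search and alert on.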
Additionally, the current version of the app supports Python 3, as we have incorporated the aob_py3 package.