All Posts

@gcusello , we're ingesting logs with these event codes, but occasionally we're not receiving all the logs from the DCs into Splunk.
There are several possible causes for that, starting from wrong permissions on the source side (we don't even know whether these are the only events that are not ingested, or whether you're ingesting any events from the Security journal at all), through input blacklisting/whitelisting, to active filtering on HFs/indexers. Don't get me wrong, but from this thread and other similar ones it looks as if your employer bought a Splunk license but didn't invest in either training for the staff or maintenance services from your friendly local Splunk partner. And you seem to need it.
Hi @shashankk , don't use join because such searches are very slow! Using my search you extract the common key that permits correlating the events containing the TestMQ and Priority fields, and the search displays the result as you like. Then you can also avoid displaying the key used for the correlation, getting exactly the result you want:

| makeresults
| eval _raw="240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400509150632034-AERG00001A [Priority=Low,ScanPriority=0, Rule: Default Rule]."
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400540101635213-AERG00000A [Priority=Low,ScanPriority=0, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: <--- TRN: 0000002481540150632034-AERG00001A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]." ]
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: <--- TRN: 0000002400547150635213-AERG00000A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]. " ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540902427245-AERC000f8A [Priority=Medium,ScanPriority=2, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000001800540152427236-AERC000f7A [Priority=Medium,ScanPriority=2, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540109427216-AERC000f6A [Priority=High,ScanPriority=1, Rule: Default Rule]." ]
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| rex field=TestMQ "\w+\.\w+\.(?<key>\w+)"
| rex "TRN\@instance\.R(?<key>[^:]++):"
| rex "Priority\=(?<Priority>\w+)"
| stats values(TestMQ) AS TestMQ count(eval(Priority="Low")) AS Low count(eval(Priority="Medium")) AS Medium count(eval(Priority="High")) AS High BY key
| fields - key
| fillnull value=0
| addtotals

Ciao. Giuseppe
Hi @AL3Z, you should run a search like the following: index=your_index EventCode=4743 — if you have no results, you have to perform two checks: first on the Splunk_TA_windows that you're using to ingest logs, to see whether this EventCode is ingested or not; maybe there's a whitelist or a blacklist that filters this EventCode. If there isn't any filter, check on your Domain Controller whether this EventCode is logged by Windows: not all events are logged by default. About this I cannot help you: you need a Windows specialist. Ciao. Giuseppe
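As a hedged illustration of the first check above (the stanza, index name, and filter line are assumptions, not taken from this thread), a filter in the Windows TA's inputs.conf that would silently drop EventCode 4743 could look like this:

```
# inputs.conf in the Windows TA on the forwarder (hypothetical example)
[WinEventLog://Security]
disabled = 0
index = your_index
# Events matching a blacklist regex are dropped at the input stage.
# If 4743 appears in a blacklist (or a whitelist omits it),
# it will never reach the indexers.
blacklist1 = EventCode="4743"
```

If you find such a line (or a whitelist without 4743), adjust it and restart the forwarder; only events generated after the change will be ingested.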
Auditd is a separate mechanism from the "normal" logging used on Linux systems (formerly based on a syslog daemon, more recently using journald, part of the systemd package). Auditd is a userspace component interacting with the kernel auditing subsystem, and that subsystem is meant for auditing. Normal syslog/journald logging is meant for "general logging", which might also include security-related events from various parts of the operating system. What gets logged (and ingested by the UF's inputs) can vary depending on: 1) the settings of various system components - for example, sshd might or might not report unsuccessful connection attempts depending on the log verbosity settings in its config file; 2) how you are storing the events and ingesting them (if you don't forward your logs from journald to syslog but only ingest with a UF from /var/log/* files, you'll miss a lot of events that the system generates).
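A minimal sketch of the file-based ingestion path described above (the monitored path, index, and sourcetype are placeholders, not from this thread) — a UF monitoring flat syslog files only sees what the syslog daemon writes to disk, so journald-only messages need forwarding first:

```
# inputs.conf on the Universal Forwarder (hypothetical values)
[monitor:///var/log/secure]
index = linux_os
sourcetype = linux_secure

# This captures only what rsyslog/syslog-ng writes to disk.
# If messages live only in the journal, enable forwarding first,
# e.g. in /etc/systemd/journald.conf:
#   ForwardToSyslog=yes
```

Whether that forwarding is already enabled depends on the distribution's defaults, so it's worth checking before assuming events are missing at the Splunk layer.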
Hi @sanju2408de, did you get any solution for this problem? I am also stuck in the same situation. Thanks
Hi, Splunk hasn't captured the 4743 events, indicating computer account deletions that occurred yesterday at 2 pm. Where should we investigate to determine the root cause? Thanks
Hi @gcusello @ITWhisperer, in this case I can see that the TransactionID is the common field between both the events (TestMQ and Priority), but I am unable to work out how to use it in the query. Can you please help and advise on it? Or can we do a JOIN based on the transaction IDs (for both the event types, TestMQ & Priority)? | rex field=_raw "(?<TransactionID>\d+-\w+)"
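As a hedged sketch of the stats-based correlation suggested elsewhere in this thread (the extraction regexes are assumptions based on the sample events shown earlier), you can extract TransactionID from both event types and group on it, which avoids join entirely:

```
index=your_index ("===> TRN@instance" OR "<--- TRN")
| rex field=_raw "(?<TransactionID>\d+-\w+)"
| rex "RCV\.FROM\.(?<TestMQ>[^@]+)\@"
| rex "Priority\=(?<Priority>\w+)"
| stats values(TestMQ) AS TestMQ values(Priority) AS Priority BY TransactionID
```

Each result row then carries both the TestMQ value (from the "<--- TRN" events) and the Priority value (from the "===> TRN" events) for the same transaction.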
Hi @iremdoesthings , I suppose that you already know Splunk and SPL; if not, let me know and I can point you to some free training to start. Anyway, as your teacher and @PickleRick hinted, the Splunk Security Essentials app is a good starting point to find searches for your use cases, but the real starting point is the data that you have available on your Indexer: which data do you have? You could analyze your data with a simple search (index=* | stats count BY sourcetype) so you know which data sources you have available and can use. In addition, the Splunk Security Essentials app has a very interesting feature that analyzes your data and tells you which searches you can implement on it; you can find it in the app at [Data > Data Inventory]. Ciao. Giuseppe
Hi @mohsplunking , as @richgalloway said, you should install the Add-On also on the HF because the parsing is done there. The installation on the Indexer depends on your architecture: if you also have one or more Search Heads, you don't need to install the Add-On on the Indexers, but you must install it on the SHs. If instead your Indexer is a stand-alone server (in other words, it's both an Indexer and a Search Head), you have to install the Add-On on the Indexer. Ciao. Giuseppe
Hi @jaro , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi, based on the license rules you cannot use the same license on two different LMs! If you can't use only one LM, then you must ask Splunk support to split your license into two separate license files and install one on the first LM and the other on the second. r. Ismo
As long as you keep the app.conf and app name etc. the same, this should work. Of course you must increase the version and build numbers.
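As a minimal sketch of the app.conf stanzas involved (the app id and numbers are placeholders): keep the [package] id unchanged so it's treated as the same app across releases, and bump the version and build for each new release:

```
# default/app.conf (hypothetical id and numbers)
[package]
id = my_custom_ta        # must stay the same across releases

[launcher]
version = 2.0.0          # increase for every new release

[install]
build = 2                # increase along with the version
```

With the id held constant, the new package is picked up as an upgrade of the existing app rather than a second, separate app.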
Hi Team, I was looking to configure custom command execution, like getting the output of the ps -ef command or the MQ queue count. Can someone please help on how to create monitoring for the same? The commands I want to configure are normal Linux commands which are executed on the server using PuTTY, like "ps -ef | grep -i otel" and others.
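One common way to do this (the app name, paths, index, and interval below are assumptions, not a definitive setup) is a scripted input on the forwarder: a small shell script whose stdout Splunk indexes on a schedule.

```
# $SPLUNK_HOME/etc/apps/my_monitoring/bin/check_otel.sh (hypothetical)
#!/bin/sh
# Splunk indexes whatever this script writes to stdout.
# The [o] trick keeps grep from matching its own process entry.
ps -ef | grep -i '[o]tel'
```

```
# $SPLUNK_HOME/etc/apps/my_monitoring/default/inputs.conf (hypothetical)
[script://./bin/check_otel.sh]
interval = 300
index = os_metrics
sourcetype = ps_otel
disabled = 0
```

Make the script executable, restart the forwarder, and each run's output becomes searchable events in the chosen index; the same pattern works for any command you'd otherwise run by hand over SSH.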
Additionally, the current version of the app supports Python 3, as we have incorporated the aob_py3 package.
Thanks for your answer. I've already attempted the second solution you provided, which involved installing the app on the latest Add-On Builder version and validating it. Unfortunately, this approach didn't resolve the issue. In the event that we decide to develop the app from scratch, could we successfully add it to Splunkbase as the next version?
Hello, Splunkers! In the current test environment, System A functions as the master license server, while System B operates as a slave utilizing System A's license. Unfortunately, System B lacks direct access to System A's master server. When executing the query below on System B, it yields the message "license usage logging not available for slave licensing instances" in the search results: index=_internal source="*license_usage.log" Is there a method to check license usage per index through a search query on the Indexer server in System B? Your assistance is greatly appreciated! Thank you in advance.
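license_usage.log only exists on the license master, but as a rough local approximation (this reports raw data stored in buckets, not the official billed license figure) you could inspect bucket sizes on the indexer itself, along these lines:

```
| dbinspect index=*
| stats sum(rawSize) AS raw_bytes BY index
| eval raw_gb = round(raw_bytes / 1024 / 1024 / 1024, 2)
| sort - raw_gb
```

This is only a sketch for sizing comparisons between indexes; for the authoritative per-index license figures you would still need access to the license master's _internal data.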
Hello, I still did not find the solution. I will look into the stuff from back in 2017! Thank you
It's okay now. In the next triggered notable, it was displayed. Thank you @gcusello.
I am not sure when they came in, but looking at a Splunk 7 instance I have, there is a mix of searchString and <search><query> in the same dashboard. Can you try converting the <searchString> to search/query and then adding an event handler outside the <query>, i.e.

<search>
  <query>your_search</query>
  <finalized>
    <set token="job_sid">$job.sid$</set>
  </finalized>
</search>

Not sure when/if finalized was changed to <done>, but I have seen <preview> and <progress> event handlers in old dashboards along with <finalized>. See if any of this works.