All Posts



Hi @shashankk 

source="MQlogs.txt" host="test" sourcetype="MQ"
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| rex field=TestMQ "\w+\.\w+\.(?<key>\w+)"
| rex "TRN\@\\w+\.R(?<key>[^:]++):"
| rex "Priority\=(?<Priority>\w+)"
| table _raw TestMQ key Priority
```| stats values(TestMQ) AS TestMQ count(eval(Priority="Low")) as Low, count(eval(Priority="Medium")) as Medium, count(eval(Priority="High")) as High BY key | fillnull value=0 | addtotals```

Could you please run this and share a screenshot of the results? When I run it, Priority is not extracted, so something looks wrong. Please suggest, thanks.
Thanks @tscroggins !
That's "by design". You only generate results for those days when you had results. That's how tstats works. You need to use timechart along with tstats and use the prestats feature of tstats. |tsta... See more...
That's "by design". You only generate results for those days when you had results. That's how tstats works. You need to use timechart along with tstats and use the prestats feature of tstats. |tstats prestats=t count where index=index_name sourcetype=xxx BY _time span=1d | timechart span=1d count  
Hello, and thank you everyone for the help. What I am trying to get out of the existing data (2024-01-08T04:53:13.028149Z) is: UpdateDate - YYYY-MM-DD, e.g. 2021-08-02; UpdateTime - HH:MM, e.g. 13:36.
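Outside Splunk (where this would typically be done with eval and strftime), the intended split can be sanity-checked with a short Python sketch; the function name and the `%f`-microseconds/`Z` format assumption are mine, based on the sample timestamp above:

```python
from datetime import datetime

def split_timestamp(ts: str) -> tuple[str, str]:
    """Split an ISO-8601 timestamp like 2024-01-08T04:53:13.028149Z
    into (UpdateDate, UpdateTime) as YYYY-MM-DD and HH:MM."""
    dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ")
    return dt.strftime("%Y-%m-%d"), dt.strftime("%H:%M")

date_part, time_part = split_timestamp("2024-01-08T04:53:13.028149Z")
print(date_part, time_part)  # 2024-01-08 04:53
```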
Hi @gcusello  I have added more lines from the sample event file below - please help me find the right key. Or, if it isn't possible with a correlation key, how should I proceed with the JOIN in this case? Kindly guide and suggest.

240108 07:12:07 17709 testget1: ===> TRN@instance2.RQ1: 0000002400840162931785-AHGM0000bA [Priority=Low,ScanPriority=0, Rule: Default Rule].
240108 07:12:07 17709 testget1: <--- TRN: 0000002400840162929525-AHGM00015A - S from [RCV.FROM.TEST.SEP2.Q2@QM.ABCD101].
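The rex patterns from the search earlier in this thread can be checked against these two sample lines with a small Python sketch (named groups translated from PCRE to Python syntax, and the possessive `[^:]++` relaxed to `[^:]+` for portability). Note what it shows: each event carries either TestMQ or Priority, never both, which is exactly the correlation problem being discussed:

```python
import re

events = [
    "240108 07:12:07 17709 testget1: ===> TRN@instance2.RQ1: "
    "0000002400840162931785-AHGM0000bA [Priority=Low,ScanPriority=0, Rule: Default Rule].",
    "240108 07:12:07 17709 testget1: <--- TRN: 0000002400840162929525-AHGM00015A "
    "- S from [RCV.FROM.TEST.SEP2.Q2@QM.ABCD101].",
]

def extract(event: str) -> dict:
    """Mirror the rex extractions: TestMQ/key from RCV.FROM lines,
    key/Priority from TRN@ lines."""
    fields = {}
    m = re.search(r"RCV\.FROM\.(?P<TestMQ>.*)@", event)
    if m:
        fields["TestMQ"] = m.group("TestMQ")
        k = re.search(r"\w+\.\w+\.(?P<key>\w+)", fields["TestMQ"])
        if k:
            fields["key"] = k.group("key")
    m = re.search(r"TRN@\w+\.R(?P<key>[^:]+):", event)
    if m:
        fields["key"] = m.group("key")
    m = re.search(r"Priority=(?P<Priority>\w+)", event)
    if m:
        fields["Priority"] = m.group("Priority")
    return fields

print(extract(events[0]))  # {'key': 'Q1', 'Priority': 'Low'}
print(extract(events[1]))  # {'TestMQ': 'TEST.SEP2.Q2', 'key': 'Q2'}
```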
Hello All, I need to fetch the dates in the past 7 days where the event count is lower than the average event count. I used the below SPL:

| tstats count where index=index_name sourcetype=xxx BY _time span=1d
| eventstats avg(count) AS avg_count

However, on days when no events are ingested, the result skips those dates; that is, it does not return the dates with an event count of zero. For example, it skips the highlighted rows in the table below:

_time       count  avg_count
2024-01-01  0      240
2024-01-02  240    240
2024-01-03  0      240
2024-01-04  0      240
2024-01-05  240    240
2024-01-06  240    240
2024-01-07  0      240

And gives the below as the result:

_time       count  avg_count
2024-01-02  240    240
2024-01-05  240    240
2024-01-06  240    240

Thus, I need your guidance to resolve this problem. Thanking you, Taruchit
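The gap-filling behavior being asked for can be sketched outside Splunk. This Python sketch (dates and counts taken from the table above; the average here is computed over all seven days, including the zero-filled ones, which differs from the constant 240 shown in the example table) builds a dense date range, fills missing days with zero, and then selects the below-average days:

```python
from datetime import date, timedelta

# Daily counts as tstats returns them: days with no events are simply absent.
sparse = {"2024-01-02": 240, "2024-01-05": 240, "2024-01-06": 240}

start = date(2024, 1, 1)
days = [(start + timedelta(d)).isoformat() for d in range(7)]

# Fill the gaps with zero (the role timechart + fillnull play in SPL).
dense = {d: sparse.get(d, 0) for d in days}
avg = sum(dense.values()) / len(dense)

# Dates whose count is below the 7-day average.
below_avg = [d for d, c in dense.items() if c < avg]
print(below_avg)
```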
Hi @eilonh  Enterprise Security (ES) is a premium app, and there is no trial or free download version available. One option is the ES Guided Product Tour: https://www.splunk.com/en_us/form/enterprise-security-tour.html The other option, as said in the previous reply, is to contact the Splunk Sales team, who can provide ES for PoC purposes. Hope this helps. If any reply helps you, you can add your upvote/karma points to that reply. Thanks.
Hi @shashankk, as I said, the problem is to identify a key contained in both types of your logs: the ones with the TestMQ field and the ones containing the Priority field. From your few sample logs I identified the regex to extract Q1, Q2, or Q3, but evidently it isn't sufficient. Can you identify a common key to use for correlation? If you don't have such a common key, it's very hard to correlate events without any relation between them. If you could share more samples, with more TestMQ values, I could help you with key identification and extraction, but in any case the only approach I see is the one I described: find a common key for correlation. Ciao. Giuseppe
Hi @jbv, As said in the previous reply, the landing page/home page of Splunk was redesigned in recent versions. However, you can create a dashboard that looks very similar to that "Data Summary" and keep it on your landing page/home page. If you let us know which index/source/sourcetype you are looking to monitor, we can help you create that data summary. Hope you got some ideas, thanks.
Hi @gcusello - Thank you for your continuous support. I was able to proceed with your suggestion but am now stuck at one point and need your help with it. Kindly suggest. Query used:

index=test_index source=*instance*/*testget*
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| rex field=TestMQ "\w+\.\w+\.(?<key>\w+)"
| rex "TRN\@\\w+\.R(?<key>[^:]++):"
| rex "Priority\=(?<Priority>\w+)"
| stats values(TestMQ) AS TestMQ count(eval(Priority="Low")) as Low, count(eval(Priority="Medium")) as Medium, count(eval(Priority="High")) as High BY key
| fillnull value=0
| addtotals

I am getting results as below: the total count (Q1+Q2) is added to Q1 only, and Q2 remains null, as shown in this example:

key | TestMQ       | Low | Medium | High | Total
Q1  | TEST.SEP.Q1  | 20  | 20     | 30   | 70
    | TEST.SEP2.Q1 |
    | TEST.SEP3.Q1 |
Q2  | TEST.SEP.Q2  | 0   | 0      | 0    | 0
    | TEST.SEP2.Q2 |
    | TEST.SEP3.Q2 |

Please guide and suggest.
Hi @AL3Z  As seen in the previous reply, to troubleshoot this issue a lot more detail is required from your side. Were any changes recently made on those DC systems (inputs.conf, apps, add-ons, etc.)? Let's say you were expecting the 4743 at 5pm yesterday: please check whether you have events around that time from that particular Windows box (search for 4pm to 6pm events from that box). As said in other posts, good questions receive good answers: the more details you provide, the better the answers/suggestions we can help you with. Thanks.
@PickleRick,
Are you missing any other events? Nope, only 4743.
Are you having connection problems? I don't think so - how do I check?
Are you getting any errors in _internal? How do I check?
Are you hitting thruput limits? Yes.
Do you ingest all events from the beginning or just current ones? Yes, we are ingesting all events from the beginning.
OK. So you have a different problem.
Are you missing any other events?
Are you having connection problems?
Are you getting any errors in _internal?
Are you hitting thruput limits?
Do you ingest all events from the beginning or just current ones?
# Import IIS module
Import-Module WebAdministration

# Get the list of Application Pools
$appPools = Get-ChildItem IIS:\AppPools

# Display the App Pool names and their statuses in key-value pairs
foreach ($appPool in $appPools) {
    $status = Get-WebAppPoolState -Name $appPool.name
    Write-Output "appPoolName=$($appPool.name), appPoolStatus=$($status.Value)"
}

You can expand it with more details if you like. You can run it either by calling powershell in your inputs stanza, or via cmd and calling powershell from there.
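For the "calling with powershell in your stanza" route, a minimal inputs.conf sketch using Splunk's [powershell://] scripted input could look like the following. The stanza name, app path, script name, sourcetype, and index are placeholders, not values from this thread; the schedule is a 6-field cron expression (here, every 5 minutes):

```
[powershell://IIS_AppPool_Status]
script = . "$SplunkHome\etc\apps\my_iis_app\bin\get_apppool_status.ps1"
schedule = 0 */5 * * * *
sourcetype = iis:apppool:status
index = windows
disabled = 0
```

With key=value pairs in the script output, the fields should be extracted automatically at search time.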
Hi @jbv, the default Splunk home page changed in the latest versions, but you can still see the available data sources with a simple search like index=*. In addition, in the Search & Reporting app you have the Search History, with the list of the last searches you ran in your Splunk, and a new feature (called Table View) created for people with little Splunk knowledge. Ciao. Giuseppe
Hi, Basically you must be a paid ES customer and named as a contact person for downloading the license, or, as @gcusello said, you must be a Splunk partner at associate (or higher) level and have fulfilled the NFR license requirements to get it. The last option is to ask for it as a PoC from Splunk or a Splunk partner, but as it's quite complicated to set up correctly, it's better to ask the Splunk partner to do it for you. Also, for show.splunk.com you need to be a partner to get access. r. Ismo
Adding to what @gcusello and @richgalloway already said, if it's a standard Splunk-supported app (I suppose by TA_Linux you mean TA_nix, but I can't be 100% sure), it will have its own docs page saying on which components it should/can be installed. If it's a third-party, independently written app, it might have such a docs page as well. Generally speaking, Splunk apps contain settings which can be active on various components (either at search time or at index time), but if an app is properly written (and as far as I remember, there are checks which make sure that you can't upload a badly written app to Splunkbase; at least badly written in this context), you can typically deploy your app on all tiers and each tier will only "use" the part of the app which applies to that tier. So your app may contain:

1) Input/output definitions - in a Splunkbase-supplied app they will be disabled by default; you have to explicitly enable them, so if you just deploy an app with disabled inputs, they won't do anything anywhere. Of course, if you're deploying your own custom app with enabled inputs or outputs, they will try to do their job wherever they are deployed.

2) Index-time props/transforms settings - they will be active either on the initial forwarder (if applicable - like EVENT_BREAKER settings) or on the first "heavy" component (one based on a full Splunk Enterprise installation) in the event's path (except ingest actions; they will be performed after the initial parsing as well, but that's a story for another day ;-)). Splunk will happily ignore them at search time.

3) Search-time props/transforms settings - they will be active only on search heads. You can safely deploy them to components active during the ingestion phase (HFs and indexers) and they will simply be ignored in the ingestion pipeline.
Hi @AL3Z, are you saying that events with this EventCode are usually ingested, but sometimes you lose events? In that case, analyze whether there was some downtime of the forwarder or of the connection, starting from the period where you're sure you lost some events. If you're sure there wasn't any downtime, open a case with Splunk Support. Ciao. Giuseppe
Hi, We're currently deploying our internal Splunk instance and we're looking for a way to monitor the data sources that have logged to our instance. I saw that previously there was a Data Summary button in the Search and Reporting app, but for some reason it's not showing up on our instance. Does anyone know if it got removed or moved somewhere else? Thank you
@gcusello, we're ingesting logs with this event code, but occasionally we're not receiving all the logs from the DCs into Splunk.