All Posts

Hi @gcusello
Refer to the requested sample query and event details below. Kindly suggest.

index=test_index=*instance*/*testget*
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High")
| stats count as TotalCount, count(eval(Priority_Level="Low")) as Low, count(eval(Priority_Level="Medium")) as Medium, count(eval(Priority_Level="High")) as High by TestMQ
| fillnull value=0

Sample events:

240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400509150632034-AERG00001A [Priority=Low,ScanPriority=0, Rule: Default Rule].  host = testserver2.com  source = /test/test.log  sourcetype = testscan
240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400540101635213-AERG00000A [Priority=Low,ScanPriority=0, Rule: Default Rule].  host = testserver2.com  source = /test/test.log  sourcetype = testscan
240105 18:06:03 19287 testget1: <--- TRN: 0000002481540150632034-AERG00001A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123].  host = testserver2.com  source = /test/test.log  sourcetype = testscan
240105 18:06:03 19287 testget1: <--- TRN: 0000002400547150635213-AERG00000A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123].  host = testserver2.com  source = /test/test.log  sourcetype = testscan
240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540902427245-AERC000f8A [Priority=Medium,ScanPriority=2, Rule: Default Rule].  host = testserver1.com  source = /test/test.log  sourcetype = testscan
240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000001800540152427236-AERC000f7A [Priority=Medium,ScanPriority=2, Rule: Default Rule].  host = testserver1.com  source = /test/test.log  sourcetype = testscan
240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540109427216-AERC000f6A [Priority=High,ScanPriority=1, Rule: Default Rule].  host = testserver1.com  source = /test/test.log  sourcetype = testscan
It's just that the link leads to an old part of the docs site which has apparently been retired.
OK. If you mean the password policy within Splunk itself, you should be able to find changes to it in the _configtracker index (I'm not sure if it's available for Cloud, but I assume it is) - look for changes to the authentication.conf file, which is where the local password policy settings ([splunk_auth] stanza) live.
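A minimal sketch of such an alert search, assuming _configtracker is populated in your environment and the default splunk_configuration_change sourcetype (the exact field names can vary by Splunk version):

index=_configtracker sourcetype=splunk_configuration_change data.path=*authentication.conf*
| rename data.changes{}.stanza as stanza, data.changes{}.properties{}.name as setting, data.changes{}.properties{}.new_value as new_value
| table _time data.path stanza setting new_value

Saved as an alert that triggers when the number of results is greater than 0, this would fire whenever someone edits those password policy settings.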
Hi @AC1,
try something like this:

index="xx" label="xx" id=*
| stats dc(id) AS id_count

Ciao.
Giuseppe
One last question: is my request a kind of "monetize your content", as in the link below?

https://docs.splunk.com/Documentation/Splunkbase/splunkbase/Splunkbase/Monetizeyourcontent

...which now leads to "Hi! This page does not exist, or has been removed from the documentation." Am I looking at something that was previously supported but now is not?

More, quoting: "you can add a license for a third party solution to your Splunk instance and have Splunk enforce it". I am not necessarily looking to "have Splunk enforce it". If I could do it just within my app, that would be fine. Can this be done?

best regards
Altin
Thanks for your response @PickleRick. We defined the policy on the Splunk Cloud SH. Connection: SHC -- IDXR -- FORWARDER
The visualization only shows the value you give it, so you need to search for the proper value. As you have multiple values of the id field, it's no wonder that you get bad results if you expect just one value. So it's up to you to redefine the problem: if you want to somehow return just one of those ids, you have to decide which one. BTW, "dedup id" will still return multiple results if you have multiple distinct ids. So please define your desired result more precisely.
Where do you have this policy? In what system? And how is it connected with Splunk?
Hello Splunkers, I want to set up an alert for changes to password parameters. For example, we have a policy of a 15-character minimum which includes at least 1 lowercase letter, 1 uppercase letter, and 1 special character. I want an alert to trigger if someone modifies this password rule.

Thanks!
Hi all, I am trying to use the single value visualization in a dashboard to keep an all-time running count of my field "id". The issue I'm running into is that I have duplicate logs for "id" that give me an incorrect number. When I run a search with the SPL below, including the dedup, I get the correct number of events, but when I try to convert that into the visualization I have issues. Any help is appreciated, thanks!

index="xx" label="xx" id=*
| dedup id
Wait a second. Are you talking about auditd or journald? These are two different things. While journald can be viewed as a replacement for a syslog daemon (yes, I know that's a bit of an oversimplification) and needs to be configured to forward data to syslog (or you have to use a syslog daemon that can read the journald socket), auditd is a separate subsystem meant strictly for logging auditing events.
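If you do go with auditd, collection can be as simple as monitoring its default log file on the forwarder. A minimal inputs.conf sketch - the linux_audit sourcetype comes with the Splunk Add-on for Unix and Linux, and the index name here is just a placeholder:

# inputs.conf - index name "os" is a placeholder, use your own
[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
index = os
disabled = 0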
I stumbled across this while seeking a solution this week. I came up with something pretty similar to @patrickp_splunk, with a slight change: I kicked things into JSON before they come out of the map command (because `map` only allowed me to bring back one field).

| datamodelsimple
| map maxsearches=500 search="| tstats count FROM datamodel=$datamodel$ | eval dmName=\"$datamodel$\" | tojson | fields - count,dmName"
| extract
| table dmName,count
Hi @shashankk,
could you share a sample of your full events for Low, Medium and High Priority?
Ciao.
Giuseppe
I don't think there is any publicly available mechanism for licensing and license enforcement performed by Splunk and available to third-party solutions (as in: you add a license for a third-party solution to your Splunk instance and have Splunk enforce it). So you're on your own here. If you can implement something like that - good for you.
Hello Splunkers, I need some help understanding the difference between auditd logging on Linux and the traditional way of capturing the log files under /var/log/*. What does auditd provide that we cannot get from /var/log/*? Secondly, I'm already collecting the basic audit files under /var/log/ using the standard TA_Nix. If I want to go with auditd, is there a different add-on for this? What are the available options? I'd appreciate some insight on this from experienced techies.

Thank you, Moh...!
Hi @kmorris_splunk, I have the same issue. The link provided leads to an empty page: "Hi! This page does not exist, or has been removed from the documentation."

best regards
Altin
Thank you very much @isoutamo. I did understand that such an application must be hosted outside Splunkbase but can be referenced from there. But what about how to do the trial expiry (my main concern)? I have searched a lot but have not found anything related.

best regards
Altin
Hello @PickleRick @gcusello @isoutamo - thanks for your kind response. I am reframing my problem statement here.

Refer to the sample events from the logs below:

240108 07:12:07 17709 testget1: ===> TRN@instance2.RQ1: 0000002400840162931785-AHGM0000bA [Priority=Low,ScanPriority=0, Rule: Default Rule].
240108 07:12:07 17709 testget1: <--- TRN: 0000002400840162929525-AHGM00015A - S from [RCV.FROM.TEST.SEP2.Q2@QM.ABCD101].

I am having issues fetching the counts for the two fields (TestMQ and Priority_Level) together in a single stats. Below is the query:

index=test_index=*instance*/*testget*
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High")
| stats count as TotalCount, count(eval(Priority_Level="Low")) as Low, count(eval(Priority_Level="Medium")) as Medium, count(eval(Priority_Level="High")) as High by TestMQ
| fillnull value=0

This gives me a result like the example below:

TestMQ | TotalCount | Low | Medium | High
MQNam1 | 120 | 0 | 0 | 0
MQNam2 | 152 | 0 | 0 | 0
..

The problem is that I am getting a "0" value for the Low, Medium & High columns, which is not correct. I want to combine both stats and show the grouped results for both fields. If I run the same query with separate stats, each one gives correct data individually.

Case 1: stats count as TotalCount by TestMQ

index=test_index=*instance*/*testget*
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High")
| stats count as TotalCount by TestMQ

Example output:

TestMQ | TotalCount
MQName | 201

Case 2: stats count as PriorityCount by Priority_Level

index=test_index=*instance*/*testget*
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High")
| stats count as PriorityCount by Priority_Level

Example output:

Priority_Level | PriorityCount
High | 20
Medium | 53
Low | 78

Please help and suggest.
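Judging from the samples, the "===>" events carry Priority and the "<---" events carry TestMQ, so no single event ever has both fields - which is why a stats split by TestMQ shows 0 in every priority column. A sketch of one way around this, assuming the "===>" and "<---" lines for the same message share a transaction id (the rex patterns here are illustrative and may need tuning to the exact format):

index=test_index
| rex "TRN@\S+: (?<trn>\S+) \[Priority=(?<Priority>\w+)"
| rex "TRN: (?<trn>\S+) \- S from \[RCV\.FROM\.(?<TestMQ>[^@]+)@"
| stats values(Priority) as Priority, values(TestMQ) as TestMQ by trn
| chart count over TestMQ by Priority
| fillnull value=0
| addtotals fieldname=TotalCount

The stats by trn stitches the two halves of each transaction together onto one row; the chart then gives one row per TestMQ with one column per priority level, and addtotals appends the TotalCount column.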
@isoutamo Thanks for your kind response. I tried the suggested approach, but it doesn't give the expected result.

index=Test
| rex "(?<TestMQ>.*)\@"
| eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High")
| chart count BY TestMQ, Priority_Level
| fillnull value=0

I am getting output as:

TestMQ | count

Expected output:

TestMQ | TotalCount | Low | Medium | High
1. As a general rule of thumb, avoid using system/local. It might not have anything to do with this particular case, or even matter much in your environment in general, but it's good practice to split your configuration into apps and maintain it as apps. system/local is the directory with the highest priority (except on clustered indexers), and you can get into undesired situations if you put your settings into system/local and can't later override them with apps. See https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles

2. The json functions are meant for working with json data. I suppose the examples were meant for json events.

3. I didn't notice it before, having concentrated on the json part, but your INGEST_EVAL, however you write it, has no chance of working if you base it on search-time extracted fields. Remember that most Splunk extractions are search-time. So in order to use part of the event for your lookup, you need to either find it by means of, for example, substr(), l/rtrim() or replace(), or extract it as an indexed field so you can use it as an argument for INGEST_EVAL (you can later assign a null value to it so it doesn't get indexed in the end).
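A minimal sketch of that replace()-based approach, with hypothetical stanza, field and sourcetype names (adjust the regex to whatever part of _raw you actually need):

# transforms.conf - stanza and field names are hypothetical
[set_ingest_key]
INGEST_EVAL = ingest_key=replace(_raw, "^.*?key=(\S+).*$", "\1")

# props.conf - replace your_sourcetype with the real sourcetype
[your_sourcetype]
TRANSFORMS-set_ingest_key = set_ingest_key

Since INGEST_EVAL runs at index time, it only sees _raw and other index-time fields, which is exactly why a search-time extraction can't be used as its argument; if you don't want ingest_key to remain as an indexed field, assign it a null value later in the pipeline as described above.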