All Posts

@PickleRick  Is there any way for my alert to send only unique results within a 24-hour window? For example, if any event occurs with ID="ABC", it should send an email alert once and then ignore further events with that ID.
Yes, I am getting multiple occurrences of the same event, because of how Splunk is reading my text file, as I told you before.
No. Don't use dedup. That's the whole point. Drop the dedup and see whether you are finding multiple occurrences of "the same" event.
This ended up working for me. I added the below to my CA yaml: customAgentConfig: -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.agent.reuse.nodeName.prefix=$(APP_NAME)
@PickleRick  Got your point. I searched for a single ID and the events are not duplicated when I use dedup ID. However, in my alert query, dedup ID does not seem to be working; it is giving me results from the raw events. The events are duplicated: the number of records I get for that ID (without dedup ID) equals the number of alerts. How can I get real-time alerts in this scenario? Do I have to re-configure data onboarding? If yes, can you guide me on how to avoid duplicate events? Here is an example of how the UF is reading that file: suppose I have 5 events, and after some time 4 more events are generated in that txt file. The overall count should be 9, but instead of 9 it shows 14; here is the breakdown (5 events at the start + 4 events added + the 5 events that were already in that file). This is how my data onboarding behaves.
The problem with fake, made-up data is when it does not accurately represent your real data. We can only provide solutions based on the data you have given. If it does not represent your data closely enough, our solutions may not work with your actual data. Please try to provide representative examples (anonymised as appropriate) which demonstrate why the proposed solution does not work for you.
@Splunk-Star  The user configuration known as "selected fields" is stored in $SPLUNK_HOME/etc/users//user-prefs/local/ui-prefs.conf and can be modified through the UI. Users can change the default user-prefs.conf settings that you set. The relevant setting is display.events.fields = ["host","source","sourcetype"]: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Ui-prefsconf#Display_Formatting_Options
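For example, a ui-prefs.conf fragment pre-setting those defaults might look like the sketch below (where you place it, app or user context, depends on whose defaults you want to set):

```ini
# ui-prefs.conf (e.g. in an app's local directory)
[search]
display.events.fields = ["host","source","sourcetype"]
```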
Hello, I'm currently working on a Splunk query designed to identify and correlate specific error events leading up to system reboots or similar critical events within our logs. My goal is to track sequences where any of several error signatures occurs shortly before a system reboot or a related event, such as a kernel panic or cold restart. These error signatures include "EDAC UE errors," "Uncorrected errors," and "Uncorrected (Non-Fatal) errors," among others. Here's the SPL query I've been refining:

index IN (xxxx) sourcetype IN ("xxxx") ("EDAC* UE*" OR "* Uncorrected error *" OR "* Uncorrected (Non-Fatal) error *" OR "reboot" OR "*Kernel panic* UE *" OR "* UE ColdRestart*")
| append [| eval search=if("true"="true", "index IN (xxx) sourcetype IN (xxxxxx) shelf IN (*) card IN (*)", "*")]
| transaction source keeporphans=true keepevicted=true startswith="*EDAC* UE*" OR "* Uncorrected error *" OR "* Uncorrected (Non-Fatal) error *" endswith="reboot" OR "*Kernel panic* UE *" OR "* UE ColdRestart*" maxspan=300s
| search closed_txn=1
| sort 0 _time
| search message!="*reboot*"
| table tj_timestamp, system, ne, message

My primary question revolves around the use of the `transaction` command, specifically the `startswith` and `endswith` parameters. I aim to use multiple conditions (error signatures) to start a transaction and multiple conditions (types of reboots) to end a transaction. Does the `transaction` command support logical operators such as OR and AND within the `startswith` and `endswith` parameters? If not, could you advise on how best to structure my query to accommodate these multiple conditions for initiating and concluding transactions? I'm looking to ensure that my query can capture any of the specified start conditions leading to any of the specified end conditions within a reasonable time frame (maxspan=300s), but I've encountered difficulties getting the expected results.
Your expertise on the best practices for structuring such queries or any insights on what I might be doing wrong would be greatly appreciated. Thank you for your time and assistance.
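For what it's worth, `transaction` accepts a single search string or an eval expression for `startswith`/`endswith`, not a bare OR-list of quoted terms; one way to express multiple start and end conditions is to precompute flags with eval and test them. A sketch only; the regexes and field names are assumptions based on the question:

```spl
index IN (xxxx) sourcetype IN ("xxxx") ("EDAC* UE*" OR "*Uncorrected error*" OR "*Uncorrected (Non-Fatal) error*" OR "reboot" OR "*Kernel panic*" OR "*UE ColdRestart*")
| eval is_start=if(match(_raw, "EDAC.+UE|Uncorrected error|Uncorrected \(Non-Fatal\) error"), 1, 0)
| eval is_end=if(match(_raw, "reboot|Kernel panic|UE ColdRestart"), 1, 0)
| transaction source maxspan=300s keeporphans=true keepevicted=true startswith=eval(is_start==1) endswith=eval(is_end==1)
| search closed_txn=1
| sort 0 _time
| table tj_timestamp, system, ne, message
```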
OK. First things first. Don't use real-time searches (in your case, real-time alerts) unless there is absolutely no other way. Real-time searches hog a CPU on the search tier and one CPU per indexer on the indexer tier, and keep them allocated for the whole duration of the search. Secondly, if you are ingesting the same events over and over again, that's not an alerting problem; that's your onboarding done wrong. Search for a single ID over a longer period of time and see if the events are duplicated. If they are, that's one of your problems (another, as I said before, is searching in real time).
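A quick way to run that duplication check (index, sourcetype, and ID values here are taken from the thread; the time range is an example):

```spl
index=pro sourcetype=logs ID="ABC" earliest=-7d
| stats count BY _raw
| where count > 1
```

If this returns rows, identical raw events were indexed more than once, which points at the file-monitoring setup rather than the alert.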
@michaelteck  The Splunk Documentation has a page that discusses which ports need to be opened, and has diagrams for both standalone and distributed deployments: https://docs.splunk.com/Documentation/Splunk/latest/InheritedDeployment/Ports  https://kinneygroup.com/blog/splunk-default-ports/    If my comment helps, please give it a thumbs up!    
@ITWhisperer 
@PickleRick  I think my alert results are not coming from the dedup search; instead, Splunk is reading the whole file again and again. Since I am using a text file that keeps getting amended by the application service until EOD, Splunk re-reads the file repeatedly throughout the day. This is why I am getting duplicated events in Splunk. Is there any way I can avoid event duplication on the universal forwarder?
I have set it to real-time monitoring with per-result triggering. What I have identified so far is that whenever Splunk reads that file, it gives me an alert based on it. For example: if there are 3 logs with Remark="xyz" and a new record is added to the file with any other (or the same) remark, it alerts me again for those 3 logs (Remark="xyz") until the file has been fully read. To avoid this I am using dedup ID. My understanding was that alerts are based on the search query; with this query I don't have duplicated events, but my alerts are duplicating, which is very strange to me. Below is my search query:

index=pro sourcetype=logs Remark="xyz" | dedup ID | table ID, _time, field1, field2, field3

Hope this clears things up.
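If the underlying goal is "alert at most once per ID per 24 hours" (as asked earlier in the thread), the alert's built-in throttling may be a better fit than dedup. A sketch in savedsearches.conf terms (the stanza name and search are examples; the same options are exposed in the alert UI as "Throttle" / "Suppress results containing field value"):

```ini
[Remark xyz alert]
search = index=pro sourcetype=logs Remark="xyz" | table ID, _time, field1, field2, field3
alert.suppress = 1
alert.suppress.fields = ID
alert.suppress.period = 24h
```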
Also looking for a solution to this, and for using a variable with: customAgentConfig: -Dappdynamics.agent.reuse.nodename.prefix=$name I can set this to a specific name, but I would like the microservice name to be picked up instead, so I can have one entry in my yaml config.
Consider that I have multiple JSON events like the following pushed to Splunk:

{ "orderNum" : "1234", "orderLocation" : "demoLoc", "details": { "key1" : "value1", "key2" : "value2" } }

I am trying to figure out a Splunk query that would give me the following output in a table:

orderNum | key  | value  | orderLocation
1234     | key1 | value1 | demoLoc
1234     | key2 | value2 | demoLoc

The value in a key-value pair can be an escaped JSON string; we also need to consider this while writing the regex.
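One possible approach, as a sketch: instead of regex, use the JSON eval functions (available in Splunk 8.1+) to enumerate the keys of the nested object and expand them into rows. Index and sourcetype names here are placeholders:

```spl
index=main sourcetype=orders
| eval details=json_extract(_raw, "details")
| eval key=json_array_to_mv(json_keys(details))
| mvexpand key
| eval value=json_extract(details, key)
| spath path=orderNum
| spath path=orderLocation
| table orderNum, key, value, orderLocation
```

Because json_extract returns the raw value at the path, an escaped-JSON string value should come through intact rather than being mangled by a regex.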
You are correct, I want to compare apple with apple (or sugar with sugar). Your query has removed the blank rows, but the comparison is failing: it says everything is "Notsame". However, here the sugar prod_qual is the same. In the real data sets there are many values which are the same and a few which are not. Thanks
Hi Splunk experts, I am looking to display a status as Green/Red in a Splunk dashboard after comparing the values of Up and Configured in the below screenshot of log entries. If both are equal it should be Green, else Red. Can anyone please guide me on how to achieve that?
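A sketch of the comparison logic, assuming Up and Configured are already extracted as fields (index, sourcetype, and the "object" column are assumptions based on the question):

```spl
index=main sourcetype=network_logs
| eval status=if(Up=Configured, "Green", "Red")
| table object, Up, Configured, status
```

In the dashboard you can then map the status value to an actual cell colour, for example with table column formatting in Dashboard Studio or color-based formatting in Simple XML.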
Hi @psomeshwar , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hello Team, can anyone please help clarify the following query and suggest a better approach for deploying the Observability solution?

I have an application deployed as a High Availability solution: it acts as primary/secondary, so the application runs on only one of the nodes at a time. We are now integrating our application with Splunk Enterprise for observability. As part of the solution, we are deploying the Splunk OTel Collector + FluentD agent to collect metrics/logs/traces.

Now, how do we manage the integration? If the application is running on HOST A, I need both agents (Splunk OTel Collector + FluentD) up and running on HOST A to collect and ingest data into Splunk Enterprise, while the agents on HOST B need to stay idle so that we don't ingest duplicate data into Splunk. This can be achieved by deploying a custom script (run under cron frequently, say every 5 minutes, to check where the application is active and start the agent services accordingly). But how do we make sure the data ingested into Splunk is appropriate (without any duplicates) when handling this scenario across two different hosts?

We would also like to avoid a dashboard drop-down for selecting the appropriate HOST to filter the data, because that makes it hard for the business team to understand where the application is currently running and select the HOST accordingly; that approach does not make great sense to me. Is there a better way to handle this situation?

In case we have a load balancer for the application, can we make use of it to tell the Splunk OTel Collector + FluentD to collect data only from the active host and then send the data through the HTTP Event Collector?
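The cron-script approach described above can be sketched roughly as follows. This is only an outline: how you determine the active host (VIP check, cluster manager query, etc.) is deployment-specific, so `get_active_host` and the service names below are hypothetical placeholders:

```shell
#!/bin/sh
# Decide whether the local telemetry agents should run, based on which
# node currently holds the application's active role.

decide_agent_state() {
  # $1 = currently active host, $2 = this host
  if [ "$1" = "$2" ]; then
    echo "start"
  else
    echo "stop"
  fi
}

# Example wiring (commented out; replace with your own HA check and
# service manager calls - get_active_host is a hypothetical helper):
# active=$(get_active_host)
# case "$(decide_agent_state "$active" "$(hostname -s)")" in
#   start) systemctl start splunk-otel-collector td-agent ;;
#   stop)  systemctl stop  splunk-otel-collector td-agent ;;
# esac
```

Note that stopping the standby's agents avoids duplicate metrics, but any telemetry buffered on the old primary during a failover window can still arrive late, so de-duplication guarantees ultimately depend on the failover mechanism itself.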
Hello @gcusello  I managed to get it to work. The solution I used was:

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| rename cid as cid1
| rename jsonevent.cid as cid2
| eval jcid = coalesce(cid1, cid2)
| stats values(ApplicationName) AS ApplicationName values(ApplicationVersion) AS ApplicationVersion values(ApplicationVendor) AS ApplicationVendor values(hostname) AS hostname values(username) AS username BY jcid

Thanks, this thread helped me a lot