All Posts

I'm trying to (efficiently) create a chart that counts events, where every bin shows the count for the previous 24 hours, over time. This is intended to show the evaluations an alert makes every x minutes: the alert triggers if the count is greater than some threshold value, and I'm adding that threshold to the chart as a static line so we should be able to see the points at which the alert could have triggered.

I have the following right now, but it's only showing one data point per day when I would prefer the normal 100 bins:

... | timechart span=1d count
| eval threshold=1000

Hope that's not too poorly worded.
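To frame what I'm after, here is a rolling-window variant I've been sketching (untested; the 5-minute span is arbitrary and the output field names are just placeholders):

... | timechart span=5m count
| streamstats time_window=24h sum(count) as count_24h
| eval threshold=1000
| fields _time count_24h threshold

The idea is that timechart produces fine-grained bins while streamstats keeps a 24-hour running total, so each point reflects the previous day's count rather than a single calendar day.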
This is kind of sed-y but it should work (assuming your automatic kv field extraction is working on your json event):

| spath input=_raw path=details output=hold
| rex field=hold mode=sed "s/({\s*|\s*}|,\s*)//g"
| makemv hold delim="\"\""
| mvexpand hold
| rex field=hold "(?<key>[^,\s\"]*)\"\s:\s\"(?<value>[^,\s\"]*)" max_match=0
| table orderNum key value orderLocation
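If you are on a Splunk version that has the json_* eval functions, a less regex-heavy sketch (assuming the nested values are plain strings) could look like this:

| spath input=_raw path=details output=details
| eval key=json_array_to_mv(json_keys(details))
| mvexpand key
| eval value=json_extract(details, key)
| table orderNum key value orderLocation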
Hi all,

I want to forward log data using the Splunk Universal Forwarder to a specific index on a Splunk indexer. I am running the UF and the Splunk indexer inside docker containers. I am able to achieve this by modifying the UF's inputs.conf after the container is started:

[monitor:///app/logs]
index = logs_data

But after making this change, I have to RESTART my UF container. I want to ensure that when my UF starts, it sends the data to the "logs_data" index by default (assuming this index is present on the Splunk indexer). I tried overriding the default inputs.conf by mounting a locally created inputs.conf to its location. Below is the snippet of how I am creating the UF container:

splunkforwarder:
  image: splunk/universalforwarder:8.0
  hostname: splunkforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license --answer-yes
    - SPLUNK_STANDALONE_URL=splunk:9997
    - SPLUNK_ADD=monitor /app/logs
    - SPLUNK_PASSWORD=password
  restart: always
  depends_on:
    splunk:
      condition: service_healthy
  volumes:
    - ./inputs.conf:/opt/splunkforwarder/etc/system/local/inputs.conf

But I am getting a weird error while the container is trying to start:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 16] Device or resource busy: b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf' -> b'/opt/splunkforwarder/etc/system/local/inputs.conf'
fatal: [localhost]: FAILED! => {
    "changed": false
}
MSG:
Unable to make /home/splunk/.ansible/tmp/ansible-moduletmp-1710787997.6605148-qhnktiip/tmpvjrugxb1 into to /opt/splunkforwarder/etc/system/local/inputs.conf, failed final rename from b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf': [Errno 16] Device or resource busy: b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf' -> b'/opt/splunkforwarder/etc/system/local/inputs.conf'

It looks like some process is trying to access inputs.conf while it is being overwritten. Can someone please help me solve this issue?

Thanks
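One workaround I have been considering (an untested sketch; it assumes the image's SPLUNK_ADD variable passes its value through to the "splunk add" CLI so that -index is honoured) is to let the forwarder create the monitor with the target index at first start, instead of bind-mounting over system/local/inputs.conf:

splunkforwarder:
  image: splunk/universalforwarder:8.0
  hostname: splunkforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license --answer-yes
    - SPLUNK_STANDALONE_URL=splunk:9997
    # hand the index to "splunk add monitor" at container start
    - SPLUNK_ADD=monitor /app/logs -index logs_data
    - SPLUNK_PASSWORD=password
  restart: always
  depends_on:
    splunk:
      condition: service_healthy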
As title. I'm updating to UF 9.2.0.1 via SCCM, but a subset of targets are failing to install the update with the dreaded 1603 return code. The behavior is the same whether or not I run the msi as SYSTEM (i.e., via USE_LOCAL_SYSTEM). All the existing forwarders being updated are newer (8.2+, but mostly 9.1.x). Oddly, if I manually run the same msiexec string with a DA account on the local system, the update usually succeeds. It's baking my noodle why it will work one way but not another. I have msiexec debug logging set up, but it's not giving me anything obvious to work with. I can also usually get it to install if I uninstall the UF and gut the registry of all vestiges of UF, but that's not something I want to do on this many systems. I've read a bunch of other threads with 1603 errors, but none of them have been my issue, as far as I can tell. Any ideas as to what the deal is?
@PickleRick Is there any way for my alert to send only unique data within a 24-hour window? For example, if an event occurs with ID="ABC", it should send an email alert once and then ignore further events with that ID.
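Roughly this is the behaviour I'm after (a sketch only; alerted_ids.csv is a hypothetical lookup used to remember which IDs have already alerted, and would need to be populated by a separate step):

index=pro sourcetype=logs Remark="xyz" earliest=-24h
| dedup ID
| lookup alerted_ids.csv ID OUTPUT ID AS already_alerted
| where isnull(already_alerted)
| table ID, _time, field1, field2, field3

Alternatively, the alert's built-in throttling (suppress results containing a field value, with ID as the field, for 24 hours) might cover the same need.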
Yes, I'm getting multiple occurrences of the same event, as I told you before when describing how Splunk is reading my text file.
No, don't use dedup. That's the whole point. Don't use dedup and see if you are finding multiple occurrences of "the same" event.
This ended up working for me; I added the below to my CA yaml:

customAgentConfig:
  -Dappdynamics.agent.reuse.nodeName=true
  -Dappdynamics.agent.reuse.nodeName.prefix=$(APP_NAME)
@PickleRick Got your point. I have searched for a single ID and the events are not duplicated if I use dedup ID. However, in my alert's query I think dedup ID is not working; it is giving me results from the raw events. The events are duplicating: the number of records I'm getting for that ID (without using dedup ID) is equal to the number of my alerts.

How can I get real-time alerts based on the above scenario? Do I have to fix my data onboarding? If yes, can you guide me on how I can avoid my events being duplicated?

Here is an example of how the UF is reading that file: suppose I have 5 events, and after some time 4 more events are generated in that txt file. The overall count should be 9, but instead of 9 it is showing 14. Here is the breakdown: 5 events at the start + 4 events added + the 5 events that were already in the file. This is how my data onboarding behaves.
The problem with fake, made-up data is when it does not accurately represent your real data. We can only provide solutions based on the data you have given. If it does not represent your data closely enough,  our solutions may not work with your actual data. Please try to provide representative examples (anonymised as appropriate) which demonstrate why the proposed solution does not work for you.
@Splunk-Star The user configuration known as "selected fields" is located in $SPLUNK_HOME/splunk/etc/users//user-prefs/local/ui-prefs.conf and can be modified through the UI, so users can change the default user-prefs.conf settings that you set. The relevant setting is display.events.fields, whose default is ["host","source","sourcetype"]. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Ui-prefsconf#Display_Formatting_Options
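For reference, a minimal ui-prefs.conf sketch (the stanza name and the added field are illustrative, not a tested configuration):

[search]
display.events.fields = ["host","source","sourcetype","index"]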
Hello,

I'm currently working on a Splunk query designed to identify and correlate specific error events leading up to system reboots or similar critical events within our logs. My goal is to track sequences where any of several error signatures occurs shortly before a system reboot or a related event, such as a kernel panic or cold restart. These error signatures include "EDAC UE errors," "Uncorrected errors," and "Uncorrected (Non-Fatal) errors," among others.

Here's the SPL query I've been refining:

index IN (xxxx) sourcetype IN ("xxxx") ("EDAC* UE*" OR "* Uncorrected error *" OR "* Uncorrected (Non-Fatal) error *" OR "reboot" OR "*Kernel panic* UE *" OR "* UE ColdRestart*")
| append [| eval search=if("true" ="true", "index IN (xxx) sourcetype IN (xxxxxx) shelf IN (*) card IN (*)", "*")]
| transaction source keeporphans=true keepevicted=true startswith="*EDAC* UE*" OR "* Uncorrected error *" OR "* Uncorrected (Non-Fatal) error *" endswith="reboot" OR "*Kernel panic* UE *" OR "* UE ColdRestart*" maxspan=300s
| search closed_txn = 1
| sort 0 _time
| search message!="*reboot*"
| table tj_timestamp, system, ne, message

My primary question revolves around the use of the `transaction` command, specifically the `startswith` and `endswith` parameters. I aim to use multiple conditions (error signatures) to start a transaction and multiple conditions (types of reboots) to end a transaction. Does the `transaction` command support using logical operators such as OR and AND within `startswith` and `endswith` parameters to achieve this? If not, could you advise on how best to structure my query to accommodate these multiple conditions for initiating and concluding transactions?

I'm looking to ensure that my query can capture any of the specified start conditions leading to any of the specified end conditions within a reasonable time frame (maxspan=300s), but I've encountered difficulties getting the expected results. Your expertise on the best practices for structuring such queries, or any insights on what I might be doing wrong, would be greatly appreciated.

Thank you for your time and assistance.
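For what it's worth, here is the shape I've been experimenting with (a rough sketch only; it assumes transaction accepts eval() expressions for startswith/endswith per the transam-filter-string syntax, and the regexes are simplified stand-ins for my real signatures):

| transaction source keeporphans=true keepevicted=true maxspan=300s
    startswith=eval(match(_raw, "EDAC.*UE|Uncorrected error|Uncorrected \(Non-Fatal\) error"))
    endswith=eval(match(_raw, "reboot|Kernel panic.*UE|UE ColdRestart"))
| search closed_txn=1

If eval() isn't usable there, another option I've seen is to tag events first (e.g. an eval'd marker field set to "start" or "end") and use startswith=(marker="start") endswith=(marker="end").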
OK. First things first. Don't use real-time searches (in your case real-time alerts) unless there is absolutely no other way. Real-time searches hog a single CPU on the search tier and one CPU per indexer on the indexer tier, and keep them allocated for the whole lifetime of the search.

Secondly, if you are ingesting the same events over and over again, that's not an alerting problem, that's your onboarding done wrong. Search for a single ID over a longer period of time and see if the events are duplicated. If they are, that's one of your problems (another, as I said before, is searching in real time).
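For example, something along these lines (the index, sourcetype and ID are placeholders taken from earlier in the thread) will show whether identical events were indexed more than once:

index=pro sourcetype=logs ID="ABC" earliest=-7d
| stats count by _raw
| where count > 1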
@michaelteck  The Splunk Documentation has a page that discusses which ports need to be opened, and has diagrams for both standalone and distributed deployments: https://docs.splunk.com/Documentation/Splunk/latest/InheritedDeployment/Ports  https://kinneygroup.com/blog/splunk-default-ports/    If my comment helps, please give it a thumbs up!    
@ITWhisperer 
@PickleRick I think my alert results are not coming from the dedup'd search; instead Splunk is reading the whole file again and again. Since I'm using a text file that keeps getting amended by the application service until EOD, Splunk is re-reading the file throughout the day, and this is why I'm getting duplicated events in Splunk. Is there any way I can avoid event duplication on the universal forwarder?
I have set it to real-time monitoring and per-result triggering. What I have identified so far is that whenever Splunk reads that file, it gives me alerts based on it. For example: if there are 3 logs with Remark="xyz" and some new record is added to the file with any other (or the same) remark, it gives me alerts again for those 3 logs (Remark="xyz") until the file has finished being read.

To avoid this I'm using dedup ID. My understanding was that alerts are based on the search query; however, with this query I don't have duplicated events, yet my alerts are duplicating, which is very strange to me. Below is my search query:

index=pro sourcetype=logs Remark="xyz"
| dedup ID
| table ID, _time, field1, field2, field3

Hope this clears it up.
Also looking for a solution to this, and for using a variable with:

customAgentConfig:
  -Dappdynamics.agent.reuse.nodename.prefix=$name

I can set this to a specific name, but I would like the microservice name to be picked up instead, so I can have one entry in my yaml config.
Consider I have multiple such JSON events pushed to Splunk:

{
  "orderNum" : "1234",
  "orderLocation" : "demoLoc",
  "details" : {
    "key1" : "value1",
    "key2" : "value2"
  }
}

I am trying to figure out a Splunk query that would give me the following output in a table:

orderNum    key     value     orderLocation
1234        key1    value1    demoLoc
1234        key2    value2    demoLoc

The value in the key-value pair can be an escaped JSON string; we also need to consider this while writing the regex.
You are correct, I want to compare apple with apple (or sugar with sugar). Your query has removed the blank rows, but the comparison is failing: it is saying everything is Notsame. However, here the sugar prod_qual is the same. In the real data sets there are many values which are the same and a few which are not.

Thanks