All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello good folks, I have a requirement where, for a given time period, I need to send out an alert if a particular value doesn't show up. This is identified by referring to a lookup table that lists all possible values that can occur in a given time period. The lookup table has the following format:

Time                                          Value
Monday 14: [1300 - 1400]      412790 AA
Monday 14: [1300 - 1400]      114556 BN
Monday 15: [1400 - 1500]      243764 TY

Based on this, in the live count for a given time period (let's take Monday 14: [1300 - 1400] as an example), if I do a stats count as Value by Time and I don't get "114556 BN" as one of the values, an alert should be generated. Where I'm stuck is matching the time with the values. If I use inputlookup first, I am not able to pass the time from the master time picker, which means I can't check a specific time frame (in this case an hour). If I use the index search first, I can match the time against the lookup by using | join type=left, but then I can't find the values that are missing from the live count but present in the lookup. I would appreciate any advice on how to go about this. Thanks in advance!
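A minimal sketch of one way to approach this, assuming the lookup file is called expected_values.csv with fields Time and Value (the file name, index, and sourcetype below are placeholders): search the live data first so the time picker still applies, append the expected values from the lookup with a zero count, and alert on anything whose total stays at zero.

index=your_index sourcetype=your_sourcetype
| stats count by Value
| append
    [| inputlookup expected_values.csv
     | search Time="Monday 14: [1300 - 1400]"
     | eval count=0
     | fields Value count]
| stats sum(count) as total by Value
| where total=0

The alert would then fire whenever this search returns any results; the hard-coded Time filter in the subsearch would need to be replaced by a token or a value computed from now() if it is supposed to follow the selected time range.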
Is there a specific set of permissions for Splunk universal forwarders and their user accounts? Maybe a document that points to this?
Trying to uninstall Splunk Enterprise 7.0.1.0 from Windows 10. I get a message from the uninstall process to "Insert the 'Splunk Enterprise' disk and click OK." The issue is I don't have a "Splunk Enterprise" disk, nor is there an MSI file to use. Please advise.
I currently have two different fields:

Host                    Domain
F32432KL34     domain.com

I wish to combine these into one field that shows the following: F32432KL34@domain.com. How would you suggest going about this?
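A minimal sketch, assuming the fields are literally named Host and Domain; the name of the combined field (HostDomain) is a placeholder:

... | eval HostDomain = Host . "@" . Domain

The dot is the usual SPL string concatenation operator, so this simply glues the two existing values together with an @ in between.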
I'm trying to (efficiently) create a chart that collects a count of events, showing the count as a value spanning the previous 24h, over time, i.e. every bin shows the count for the previous 24h. This is intended to show the evaluations an alert is making every x minutes, where it triggers if the count is greater than some threshold value. I'm adding that threshold to the chart as a static line, so we should be able to see the points at which the alert could have triggered. I have the following right now, but it's only showing one data point per day when I would prefer the normal 100 bins:

... | timechart span=1d count
| eval threshold=1000

Hope that's not too poorly worded.
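One possible sketch of a rolling 24-hour window, assuming a 15-minute bin size is acceptable (the span and threshold values are placeholders): bucket the events finely with timechart, then let streamstats sum the last 24 hours of bins at every point.

...
| timechart span=15m count
| streamstats time_window=24h sum(count) as count_24h
| eval threshold=1000
| fields _time count_24h threshold

As far as I know, streamstats time_window only needs time-ordered results, which timechart output already is; note that the first 24 hours of points will carry partial sums unless the search window starts a day earlier than the chart is meant to begin.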
Hi all, I want to forward log data using the Splunk Universal Forwarder to a specific index on a Splunk indexer. I am running the UF and the Splunk indexer inside Docker containers. I am able to achieve this by modifying the inputs.conf file of the UF after the container is started:

[monitor::///app/logs]
index = logs_data

But after making this change I have to RESTART my UF container. I want to ensure that when my UF starts, it sends the data to the "logs_data" index by default (assuming this index is present on the Splunk indexer). I tried overriding the default inputs.conf by mounting a locally created inputs.conf to its location. Below is a snippet of how I am creating the UF container:

splunkforwarder:
  image: splunk/universalforwarder:8.0
  hostname: splunkforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license --answer-yes
    - SPLUNK_STANDALONE_URL=splunk:9997
    - SPLUNK_ADD=monitor /app/logs
    - SPLUNK_PASSWORD=password
  restart: always
  depends_on:
    splunk:
      condition: service_healthy
  volumes:
    - ./inputs.conf:/opt/splunkforwarder/etc/system/local/inputs.conf

But I am getting a strange error while the container is trying to start:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 16] Device or resource busy: b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf' -> b'/opt/splunkforwarder/etc/system/local/inputs.conf'
fatal: [localhost]: FAILED! => { "changed": false }
MSG: Unable to make /home/splunk/.ansible/tmp/ansible-moduletmp-1710787997.6605148-qhnktiip/tmpvjrugxb1 into to /opt/splunkforwarder/etc/system/local/inputs.conf, failed final rename from b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf': [Errno 16] Device or resource busy: b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf' -> b'/opt/splunkforwarder/etc/system/local/inputs.conf'

It looks like some process is trying to access inputs.conf while it is being overwritten. Can someone please help me solve this issue? Thanks
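One possible workaround, assuming the container's Ansible provisioning is what renames inputs.conf in place (a bind-mounted single file cannot be replaced by a rename, which would explain the "Device or resource busy" error): mount a whole app directory instead of the single file, so the monitor stanza lives in an app rather than in system/local. The app name my_uf_inputs below is a placeholder.

# docker-compose snippet (sketch)
  volumes:
    - ./my_uf_inputs:/opt/splunkforwarder/etc/apps/my_uf_inputs

# ./my_uf_inputs/local/inputs.conf
[monitor:///app/logs]
index = logs_data

With the stanza in an app directory, the provisioning can still rename files under etc/system/local freely, and the index assignment is in place from the first start without a restart.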
As title. I'm updating to UF 9.2.0.1 via SCCM, but a subset of targets are failing to install the update with the dreaded 1603 return code. The behavior is the same whether I run the msi as SYSTEM (i.e., via USE_LOCAL_SYSTEM) or not. All the existing forwarders being updated are newer - 8.2+, but mostly 9.1.x. Oddly, if I manually run the same msiexec string with a DA account on the local system, the update usually succeeds. It's baking my noodle why it will work one way but not another. I have msiexec debug logging set up, but it's not giving me anything obvious to work with. I can also usually get it to install if I uninstall the UF and gut the registry of all vestiges of UF, but that's not something I want to do on this many systems. I've read a bunch of other threads with 1603 errors but none of them have been my issue, as far as I can tell. Any ideas as to what the deal is?
Hello, I'm currently working on a Splunk query designed to identify and correlate specific error events leading up to system reboots or similar critical events within our logs. My goal is to track sequences where any of several error signatures occurs shortly before a system reboot or a related event, such as a kernel panic or cold restart. These error signatures include "EDAC UE errors," "Uncorrected errors," and "Uncorrected (Non-Fatal) errors," among others. Here's the SPL query I've been refining:

index IN (xxxx) sourcetype IN ("xxxx") ("EDAC* UE*" OR "* Uncorrected error *" OR "* Uncorrected (Non-Fatal) error *" OR "reboot" OR "*Kernel panic* UE *" OR "* UE ColdRestart*")
| append [| eval search=if("true" ="true", "index IN (xxx) sourcetype IN (xxxxxx) shelf IN (*) card IN (*)", "*")]
| transaction source keeporphans=true keepevicted=true startswith="*EDAC* UE*" OR "* Uncorrected error *" OR "* Uncorrected (Non-Fatal) error *" endswith="reboot" OR "*Kernel panic* UE *" OR "* UE ColdRestart*" maxspan=300s
| search closed_txn = 1
| sort 0 _time
| search message!="*reboot*"
| table tj_timestamp, system, ne, message

My primary question revolves around the use of the transaction command, specifically the startswith and endswith parameters. I aim to use multiple conditions (error signatures) to start a transaction and multiple conditions (types of reboots) to end one. Does the transaction command support logical operators such as OR and AND within the startswith and endswith parameters? If not, could you advise on how best to structure my query to accommodate these multiple conditions for initiating and concluding transactions? I'm looking to ensure that my query can capture any of the specified start conditions leading to any of the specified end conditions within a reasonable time frame (maxspan=300s), but I've encountered difficulties getting the expected results. Your expertise on the best practices for structuring such queries, or any insights on what I might be doing wrong, would be greatly appreciated. Thank you for your time and assistance.
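A sketch of one way the start/end conditions could be expressed, assuming the whole condition is given to transaction as a single eval() expression rather than several bare OR clauses; the regexes below are illustrative stand-ins for the real signatures, not the exact patterns.

index IN (xxxx) sourcetype IN ("xxxx")
| transaction source keeporphans=true keepevicted=true maxspan=300s
    startswith=eval(match(_raw, "EDAC.+UE|Uncorrected error|Uncorrected \(Non-Fatal\) error"))
    endswith=eval(match(_raw, "reboot|Kernel panic.+UE|UE ColdRestart"))
| search closed_txn=1

As far as I know, startswith and endswith each accept a single filter string, which can be a quoted search expression or eval(<expression>), so the OR logic has to live inside that one string rather than between multiple bare arguments.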
Consider I have multiple such JSON events pushed to Splunk:

{
  "orderNum" : "1234",
  "orderLocation" : "demoLoc",
  "details": {
    "key1" : "value1",
    "key2" : "value2"
  }
}

I am trying to figure out a Splunk query that would give me the following output in a table:

orderNum    key      value      orderLocation
1234             key1     value1    demoLoc
1234             key2     value2    demoLoc

The value from the key-value pair can be an escaped JSON string; we also need to consider this while writing the regex.
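A minimal sketch that avoids regex by treating the event as JSON, assuming the events are valid JSON; the index and sourcetype names are placeholders.

index=your_index sourcetype=your_json_sourcetype
| spath output=details path=details
| eval key=json_array_to_mv(json_keys(details))
| mvexpand key
| eval value=json_extract(details, key)
| table orderNum key value orderLocation

Because json_extract returns the raw JSON value, an escaped JSON string in one of the pairs stays intact in the value column instead of being mangled by a regex.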
Hi Splunk experts, I am looking to display a status of Green/Red in a Splunk dashboard after comparing the values of Up and Configured in the log entries shown in the screenshot below. If both are equal the status should be Green, else Red. Can anyone please guide me on how to achieve that?
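A small sketch of the comparison, assuming Up and Configured are already extracted as fields (those names come from the post; everything else is a placeholder):

...
| eval status=if(tonumber(Up)==tonumber(Configured), "Green", "Red")
| table host Up Configured status

In a dashboard table the status column can then be colour-mapped (Green → green, Red → red) using the table's colour formatting options.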
Hello Team, can anyone please help me clarify the following query and suggest a better approach for deploying the Observability solution? I have an application that is deployed as a high-availability solution: it acts as primary/secondary, so the application runs on only one node at a time. We are now integrating our application with Splunk Enterprise for observability. As part of the solution, we are deploying the Splunk OTel Collector + FluentD agent to collect the metrics/logs/traces. How do we manage the integration? If the application is running on HOST A, I need to make sure both agents (Splunk OTel Collector + FluentD) are up and running on HOST A to collect and ingest data into Splunk Enterprise, while the agents on HOST B stay idle so that we don't ingest data from it. This can be achieved by deploying a custom script (executed by cron frequently, say every 5 minutes, to check where the application is active and start the agent services accordingly). But how do we make sure the data ingested into Splunk is correct (without any duplicates) when handling this scenario, given there are two different hosts? We would also like to avoid a drop-down in the dashboard to select the appropriate host to filter the data, because that makes it hard for the business team to understand where the application is currently running and select the host accordingly, so this approach does not make much sense to me. Is there a better approach to handle this situation? If we have a load balancer for the application, can we use it to tell the Splunk OTel Collector + FluentD to collect data only from the active host and then send the data through the HTTP Event Collector?
Hello, one of my Splunk searches uses a .csv file. I'm trying to find where the .csv is located within Splunk and I can't find it. Is there any command that I can run in Splunk to find the file location, please?
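If the .csv is a lookup file, one way to find it (a sketch; mylookup.csv is a placeholder for the actual file name) is to ask the REST API where each lookup table file lives:

| rest /servicesNS/-/-/data/lookup-table-files
| search title="mylookup.csv"
| table title eai:acl.app eai:data

The eai:data column holds the on-disk path, typically under $SPLUNK_HOME/etc/apps/<app>/lookups/.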
Currently, I need to join information from two different indexes. I cannot show the information as it is confidential, but I can give a general overview of what it should look like.

Search:
index=index1 sourcetype=sourcetype1
| table ApplicationName, ApplicationVersion, ApplicationVendor, cid

Result:
ApplicationName   ApplicationVersion   ApplicationVendor   cid
name                        1.0.3                            vendor                       78fds87324
...

Search 2:
index=index2 sourcetype=sourcetype2
| table hostname, user, cid

Result:
hostname             user               cid
domainname       username     78fds87324
...

What I need is a way to show the ApplicationName, ApplicationVersion, ApplicationVendor, hostname and username all in one table, connected through the cid. Anyone have any ideas?
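A minimal sketch of a join-free approach, using the field names from the post and assuming cid is the common key: search both sourcetypes at once and roll everything up by cid with stats.

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| stats values(ApplicationName) as ApplicationName
        values(ApplicationVersion) as ApplicationVersion
        values(ApplicationVendor) as ApplicationVendor
        values(hostname) as hostname
        values(user) as user
    by cid

Unlike | join, stats is not subject to subsearch result limits; cids that only appear on one side simply show empty cells and can be filtered out afterwards if they are not wanted.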
Hello friends, current setup: we have multiple locations in Europe, and at each location we have multiple Windows servers forwarding logs to a Windows log collector server; from the log collector we collect the logs into Splunk Cloud. At a few sites we are not receiving logs from the Windows servers. We checked the GPO policy and it is properly configured, but when checking gpresult some of the settings are not properly applied. I tried gpupdate and tried again, but the issue continues.
Thanks. I am trying to extract three fields from the message below:

"message" : "BatchId : 7, RequestId : 100532188, Msg : Batch status to be update to SUCCESS",

Extract: BatchId, RequestId, Status (Status needs to extract SUCCESS).

| rex "BatchId\s*:\s*(?<batch>[^,]+),\s*RequestId\s:\s*(?<RequestID>[^,]+),\s*Msg : Batch status to be update to (?<Status>\w+)"
There is a practice of setting queueSize in the inputs.conf [http://<token>] stanza. queueSize overrides the maxSize setting of the server.conf stanza:

[queue=httpInputQ]
maxSize

Now suppose you have multiple tokens with different queueSize values:

inputs.conf
[http://1]
queueSize=1
[http://2]
queueSize=2
[http://3]
queueSize=3
[http://4]
queueSize=4

Globally, only one inputs.conf stanza wins for the final httpInputQ size. This setting should only be set if 'persistentQueueSize' is set as well. If multiple HTTP inputs are configured and each input sets 'queueSize' but persistentQueueSize is not set, splunkd will create one in-memory queue and pick the 'queueSize' value from the first stanza (after sorting the http stanzas in ascending order) matching the token of the first received HTTP event. With multiple pipelines configured, each pipeline will create one in-memory queue depending on the first HTTP event received by that pipeline, so each pipeline might end up with a differently sized httpInputQ. If there are multiple http stanzas configured and 'persistentQueueSize' is not set, prefer to set 'maxSize' under the 'queue=httpInputQ' stanza in server.conf. So the best practice is to never set a per-token queueSize in inputs.conf; instead set it once in server.conf, if not setting persistentQueueSize:

[queue=httpInputQ]
maxSize = <size>
Hi, I have a specific requirement for a time chart in my dashboard. I have a Single Value viz which shows the value and a trend comparison. If I set the time range to the last 24 hrs., it should display the last 24 hrs. count as the value and the previous 24 hrs. count (the difference) as the trend. I can achieve this by adding span=1d to the query, but when it comes to hours it doesn't work the same way. Can anyone help me with this? Thanks in advance.
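A sketch for the hourly case, assuming the panel can use its own time range and a one-hour comparison window (the index name and window are placeholders): search back two spans and bucket by one span, so the Single Value viz has exactly a current bucket and a previous bucket to compare.

index=your_index earliest=-2h@h latest=@h
| timechart span=1h count

The same pattern covers the daily case with earliest=-48h@h and span=24h; the viz then shows the latest bucket as the value and the difference to the bucket before it as the trend.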
Hi, I am comparing two JSON data sets with respect to the values of some nested keys in them. The comparison is working fine, except that at the end I am getting some blank rows, with no data in any of the columns except the diff column that I am inserting. I am including the query that I am using. Since I am using appendcols, the data sets returned by the respective search commands would be as below:

data1={ \"Sugar\": { \"prod_rate\" : \"50\", \"prod_qual\" : \"Good\" }, \"Rice\": { \"prod_rate\" : \"80\", \"prod_qual\" : \"OK\" }, \"Potato\": { \"prod_rate\" : \"87\", \"prod_qual\" : \"OK\" } }

data2="{ \"Sugar\": { \"prod_rate\" : \"50\", \"prod_qual\" : \"Good\" }, \"Wheat\": { \"prod_rate\" : \"50\", \"prod_qual\" : \"Good\" } }"

The actual query, with the proper search command in place, is returning some blank rows. How can I remove them from the display?

index=data1
| eval grain_name = json_array_to_mv(json_keys(data1))
| mvexpand grain_name
| eval data = json_extract(data1, grain_name), qual = json_extract(data, "prod_qual")
| table grain_name, qual
| appendcols
    [ search index=data2
    | eval grain_name2 = json_array_to_mv(json_keys(data2))
    | mvexpand grain_name2
    | eval data2 = json_extract(data2, grain_name2), qual2 = json_extract(data2, "prod_qual")]
| eval diff = if(match(qual, qual2), "Same", "NotSame")
| table grain_name, qual, diff
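A small sketch of one way to drop the padded rows, assuming the blank rows are the ones appendcols adds when one side returns more rows than the other, so grain_name is empty on those rows:

...
| eval diff = if(match(qual, qual2), "Same", "NotSame")
| where isnotnull(grain_name) AND grain_name!=""
| table grain_name, qual, diff

Worth noting that appendcols pairs rows purely by position; if the two grain lists don't line up, a stats or join keyed on the grain name might be a more robust way to compare them.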
Hello, I have set up an email alert. ID is the unique identifier. My source file is a text file which updates after some time; whenever new activity is captured, the forwarder re-reads that file. To avoid duplication in the search I'm using dedup ID; if I don't use dedup ID in my search it shows me a number of results which is not equal to the file. For example: my file has 3 logs; after some activity 2 more logs are added to the file, so the total count is 5, however Splunk shows 8 events in the GUI. To avoid this I'm using dedup ID. Now the issue is that my alert is real-time and I'm getting a lot of duplicated results in my email. Below is my query:

index=pro sourcetype=logs Remark="xyz"
| dedup ID
| table ID, _time, field1, field2, field3, field4

Using the above query I'm getting the correct result in the GUI, but a number of alerts are generated via email.
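One possible sketch, assuming a scheduled alert over a short bounded window is acceptable instead of the real-time alert (the 15-minute window is a placeholder); combined with alert throttling on ID, each ID should then only be mailed once.

index=pro sourcetype=logs Remark="xyz" earliest=-15m@m latest=@m
| dedup ID
| table ID, _time, field1, field2, field3, field4

Run it on a matching 15-minute schedule and, in the alert's throttle settings, suppress results containing the field value ID for longer than the file's re-read interval, so re-indexed copies of the same line don't trigger a second email.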
Hello everyone, in my Splunk journey I have to write documentation for the installation of the Universal Forwarder. Our forwarders will be installed on VMs that sit on a private network, so we need some network configuration to let the Universal Forwarder send data to the Splunk indexers. Our indexers are installed on another private network; we created a rule on that network to receive data on port 9997 of the Splunk server. I'm looking for the network prerequisites before installing the forwarder. What rules do we have to create on the forwarder's network? What ports do we have to open on the forwarder's network? Do we need to create a specific flow for the forwarder to send data to the indexers? What protocol do we have to set up on the forwarder's network? Thanks to all who read me.
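A minimal sketch of the forwarder-side configuration the firewall rules have to allow, assuming plain Splunk-to-Splunk forwarding over TCP 9997 (the hostnames below are placeholders): outbound TCP from each forwarder to every indexer on 9997, plus optionally TCP 8089 to a deployment server if one is used.

# outputs.conf on the Universal Forwarder (hostnames are examples)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
# useACK = true   # optional: request indexer acknowledgement

The traffic is forwarder-initiated, so only the outbound rule from the forwarder network to the indexer network is needed for data forwarding; nothing has to be opened inbound toward the forwarders for this.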