All Topics

Hi Splunkers, I'm currently working on customizing the Splunk login screen to change the logo, background, footer, etc. I referred to the Splunk documentation (https://docs.splunk.com/Documentation/Splunk/9.1.3/AdvancedDev/CustomizeLogin) and successfully completed the customization. Now the Splunk login screen displays my logo, background image, and footer content.

However, I encountered an issue when running Splunk App Inspect on the custom-login app I created. The Splunk App Inspect tool reported 8 failures, including:

- The web.conf file contains a [settings] stanza, which is not permitted. Only [endpoint:] and [expose:] stanzas are allowed in web.conf. Line Number: 1
- The 'local' directory exists, which is not allowed. All configuration should be in the 'default' directory.
- The 'local.meta' file exists, which is not permitted. All metadata permissions should be set in 'default.meta'.
- No README file was found, which should include version support, system requirements, installation, configuration, troubleshooting, and running of the app, or a link to online documentation.

I'm wondering if the [endpoint:*] and [expose:*] stanzas in web.conf are necessary for customizing the login screen. Are these stanzas required for login screen changes? All other issues have been fixed (this is intended for a production environment).

Below is the corrected version of the custom_login app structure based on the Splunk App Inspect recommendations:

```
custom_login
|-- default
|   |-- app.conf
|   |-- web.conf
|   |-- restmap.conf
|   |-- savedsearches.conf
|   |-- ui_prefs.conf
|   |-- README
|   |-- data
|-- metadata
|   |-- default.meta
|-- static
|   |-- appIcon.png
|   |-- appIcon_2x.png
|   |-- appIconAlt.png
|   |-- appIconAlt_2x.png
|   |-- background.png
|   |-- fav_logo.png
|-- bin
|   |-- readme.txt
|-- appserver
    |-- static
        |-- background.png
        |-- fav_logo.png
```

Here are the contents of the configuration files:

**app.conf**
```
[launcher]
author = NAME
description = <<<<<XXXXXXXXYYYYYY>>>>>>.
version = Custom login 1.0

[install]
is_configured = 0

[ui]
is_visible = 1
label = Custom app login

[triggers]
reload.web = web.conf
reload.restmap = restmap.conf
reload.ui_prefs = ui_prefs.conf
```

**restmap.conf**
```
# restmap.conf for custom_login
[endpoint:login_background]
# REST endpoint for login background image configurations
match = /custom_login
```

**ui_prefs.conf**
```
[settings]
loginBackgroundImageOption = custom
loginCustomBackgroundImage = custom_login:appserver/static/background.png
login_content = This is a server managed by my team. For any inquiries, please reach out to us at YYYY.com
loginCustomLogo = custom_login:appserver/static/fav_logo.png
loginFooterOption = custom
loginFooterText = © 2024 XXXXXXX
```

**web.conf**
```
[endpoint:login]

[expose:login_background]
pattern = /custom_login
methods = GET
```

I am currently working in a development environment. Any advice on how to proceed with these changes would be appreciated. Thanks in advance.
Hello good folks, I have a requirement where, for a given time period, I need to send out an alert if a particular 'value' doesn't come up. This is to be identified by referring to a lookup table which has the list of all possible values that can occur in a given time period. The lookup table is of the format below:

Time                     | Value
Monday 14: [1300 - 1400] | 412790 AA
Monday 14: [1300 - 1400] | 114556 BN
Monday 15: [1400 - 1500] | 243764 TY

Based on this, in the live count for the given time period (let's take Monday 14: [1300 - 1400] as an example), if I do a stats count as Value by Time and I don't get "114556 BN" as one of the values, an alert is to be generated.

Where I'm stuck is matching the time with the values. If I use inputlookup first, I am not able to pass the time from the master time picker, which means I cannot check a specific time frame (in this case an hour). If I use the index search first, I am able to match the time against the lookup by using | join type=left, but I am not able to find the missing values which are absent from the live count yet present in the lookup. Would appreciate some advice on how to go about this. Thanks in advance!
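One hedged sketch of a way to surface the missing values, assuming the lookup file is called expected_values.csv with columns Time and Value, and that the live events carry matching Time and Value fields (all of those names are assumptions): live events contribute a count of at least 1, lookup rows contribute 0, and anything that only came from the lookup ends up with a zero live count.

```
index=my_index sourcetype=my_sourcetype
| stats count by Time, Value
| append
    [| inputlookup expected_values.csv
     | eval count=0]
| stats sum(count) as live_count by Time, Value
| where live_count=0
```

The time picker still restricts only the indexed events; if the check should cover a single hour, the lookup side can be narrowed with an extra filter such as | search Time="Monday 14: [1300 - 1400]" (or a computed equivalent) inside the subsearch.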
Is there a specific set of permissions for Splunk universal forwarders and their user accounts? Maybe a document that points to this?
Trying to uninstall Splunk Enterprise 7.0.1.0 from Windows 10.  I get a message from the uninstall process to "Insert the 'Splunk Enterprise' disk and click OK." The issue is I don't have a "Splunk Enterprise" disk.  Nor is there an msi file to use.   Please advise.
I currently have two different fields:

Host          | Domain
F32432KL34    | domain.com

I wish to combine these into one field that shows the following: F32432KL34@domain.com

How would you suggest going about this?
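A minimal sketch, assuming the fields are literally named Host and Domain and the combined field can be called HostDomain (that name is an assumption):

```
... | eval HostDomain = Host . "@" . Domain
```

| strcat Host "@" Domain HostDomain would produce the same result; eval with the . concatenation operator is simply the more common idiom.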
I'm trying to (efficiently) create a chart that collects a count of events, showing the count as a value spanning the previous 24h, over time, i.e. every bin shows the count for the previous 24h. This is intended to show the evaluations an alert is making every x minutes, where it triggers if the count is greater than some threshold value. I'm adding that threshold to the chart as a static line so we should be able to see the points at which the alert could have triggered.

I have the following right now, but it's only showing one data point per day when I would prefer the normal 100 bins:

```
... | timechart span=1d count | eval threshold=1000
```

Hope that's not too poorly worded.
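A hedged alternative sketch: keep a fine timechart span and let streamstats compute a trailing 24-hour sum over those bins (the 15-minute span and the field name count_last_24h are assumptions; the threshold value is kept from the search above):

```
...
| timechart span=15m count
| streamstats time_window=24h sum(count) as count_last_24h
| eval threshold=1000
| fields _time, count_last_24h, threshold
```

streamstats with time_window needs the rows ordered by _time, which timechart already guarantees, so each plotted point should reflect the count over the preceding 24 hours rather than a calendar day.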
Hi All, I want to forward log data using the Splunk Universal Forwarder to a specific index on the Splunk indexer. I am running the UF and the Splunk indexer inside Docker containers. I am able to achieve this by modifying the inputs.conf file of the UF after the container is started:

```
[monitor::///app/logs]
index = logs_data
```

But after making this change, I have to RESTART my UF container. I want to ensure that when my UF starts, it sends the data to the "logs_data" index by default (assuming this index is present on the Splunk indexer).

I tried overriding the default inputs.conf by mounting a locally created inputs.conf to its location. Below is the snippet of how I am creating the UF container:

```
splunkforwarder:
  image: splunk/universalforwarder:8.0
  hostname: splunkforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license --answer-yes
    - SPLUNK_STANDALONE_URL=splunk:9997
    - SPLUNK_ADD=monitor /app/logs
    - SPLUNK_PASSWORD=password
  restart: always
  depends_on:
    splunk:
      condition: service_healthy
  volumes:
    - ./inputs.conf:/opt/splunkforwarder/etc/system/local/inputs.conf
```

But I am getting a weird error while the container is trying to start:

```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 16] Device or resource busy: b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf' -> b'/opt/splunkforwarder/etc/system/local/inputs.conf'
fatal: [localhost]: FAILED! => {
    "changed": false
}
MSG:
Unable to make /home/splunk/.ansible/tmp/ansible-moduletmp-1710787997.6605148-qhnktiip/tmpvjrugxb1 into to /opt/splunkforwarder/etc/system/local/inputs.conf, failed final rename from b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf': [Errno 16] Device or resource busy: b'/opt/splunkforwarder/etc/system/local/.ansible_tmpnskbxfddinputs.conf' -> b'/opt/splunkforwarder/etc/system/local/inputs.conf'
```

It looks like some process is trying to access inputs.conf while it is being overridden. Can someone please help me solve this issue?

Thanks
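A hedged workaround sketch, on the assumption that the failure happens only because the container's Ansible provisioning cannot rename a file over a single bind-mounted inputs.conf: ship the input as a small app and mount the whole directory instead, so nothing inside the container needs to replace an individually mounted file. The app name uf_inputs_app and the paths below are assumptions, not anything taken from the official image documentation.

```
# ./uf_inputs_app/default/inputs.conf on the host
[monitor:///app/logs]
index = logs_data

# docker-compose volume entry replacing the single-file mount
volumes:
  - ./uf_inputs_app:/opt/splunkforwarder/etc/apps/uf_inputs_app
```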
As title. I'm updating to UF 9.2.0.1 via SCCM, but a subset of targets are failing to install the update with the dreaded 1603 return code. The behavior is the same whether or not I run the msi as SYSTEM (i.e., via USE_LOCAL_SYSTEM). All the existing forwarders being updated are newer - 8.2+, but mostly 9.1.x. Oddly, if I manually run the same msiexec string with a DA account on the local system, the update usually succeeds. It's baking my noodle why it will work one way but not another. I have msiexec debug logging set up, but it's not giving me anything obvious to work with. I can also usually get it to install if I uninstall the UF and gut the registry of all vestiges of UF, but that's not something I want to do on this many systems. I've read a bunch of other threads with 1603 errors but none of them have been my issue, as far as I can tell. Any ideas as to what the deal is?
Hello, I'm currently working on a Splunk query designed to identify and correlate specific error events leading up to system reboots or similar critical events within our logs. My goal is to track sequences where any of several error signatures occurs shortly before a system reboot or a related event, such as a kernel panic or cold restart. These error signatures include "EDAC UE errors," "Uncorrected errors," and "Uncorrected (Non-Fatal) errors," among others.

Here's the SPL query I've been refining:

```
index IN (xxxx) sourcetype IN ("xxxx") ("EDAC* UE*" OR "* Uncorrected error *" OR "* Uncorrected (Non-Fatal) error *" OR "reboot" OR "*Kernel panic* UE *" OR "* UE ColdRestart*")
| append [| eval search=if("true" ="true", "index IN (xxx) sourcetype IN (xxxxxx) shelf IN (*) card IN (*)", "*")]
| transaction source keeporphans=true keepevicted=true startswith="*EDAC* UE*" OR "* Uncorrected error *" OR "* Uncorrected (Non-Fatal) error *" endswith="reboot" OR "*Kernel panic* UE *" OR "* UE ColdRestart*" maxspan=300s
| search closed_txn = 1
| sort 0 _time
| search message!="*reboot*"
| table tj_timestamp, system, ne, message
```

My primary question revolves around the use of the `transaction` command, specifically the `startswith` and `endswith` parameters. I aim to use multiple conditions (error signatures) to start a transaction and multiple conditions (types of reboots) to end a transaction. Does the `transaction` command support using logical operators such as OR and AND within the `startswith` and `endswith` parameters to achieve this? If not, could you advise on how best to structure my query to accommodate these multiple conditions for initiating and concluding transactions?

I'm looking to ensure that my query can capture any of the specified start conditions leading to any of the specified end conditions within a reasonable time frame (maxspan=300s), but I've encountered difficulties getting the expected results. Your expertise on the best practices for structuring such queries or any insights on what I might be doing wrong would be greatly appreciated. Thank you for your time and assistance.
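For what it's worth, startswith and endswith each take a single filter value (a quoted search expression or an eval() expression), so the bare OR chains outside the quotes in the query above are not parsed the way one might hope. One hedged alternative is to collapse the alternatives into a single eval() regex match; this is only a sketch that mirrors the patterns from the search above and will likely need tuning:

```
...
| transaction source keeporphans=true keepevicted=true maxspan=300s
    startswith=eval(match(_raw, "EDAC.*UE|Uncorrected error|Uncorrected \(Non-Fatal\) error"))
    endswith=eval(match(_raw, "reboot|Kernel panic.*UE|UE ColdRestart"))
| search closed_txn=1
```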
Consider I have multiple such JSON events pushed to Splunk:

```
{
  "orderNum" : "1234",
  "orderLocation" : "demoLoc",
  "details" : {
    "key1" : "value1",
    "key2" : "value2"
  }
}
```

I am trying to figure out a Splunk query that would give me the following output in a table:

orderNum | key  | value  | orderLocation
1234     | key1 | value1 | demoLoc
1234     | key2 | value2 | demoLoc

The value from the key-value pair can be an escaped JSON string; we also need to consider this while writing the regex.
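A hedged sketch using the JSON eval functions rather than a regex, assuming each event's _raw is exactly the JSON shown above (the index and sourcetype names are placeholders):

```
index=my_index sourcetype=my_sourcetype
| eval orderNum = json_extract(_raw, "orderNum"),
       orderLocation = json_extract(_raw, "orderLocation"),
       details_json = json_extract(_raw, "details")
| eval key = json_array_to_mv(json_keys(details_json))
| mvexpand key
| eval value = json_extract(details_json, key)
| table orderNum, key, value, orderLocation
```

If a nested value is itself an escaped JSON string, json_extract simply returns it as a string, so the table should still populate without extra regex handling.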
Hi Splunk experts, I am looking to display a status of Green/Red in a Splunk dashboard after comparing the values of Up & Configured in the log entries shown in the screenshot below. If both are equal the status should be Green, otherwise Red. Can anyone please guide me on how to achieve that?
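A hedged sketch, assuming the two numbers are already extracted into fields named Up and Configured (if they only exist in the raw text, a rex extraction would be needed first):

```
...
| eval status = if(Up == Configured, "Green", "Red")
| table host, Up, Configured, status
```

The Green/Red colouring itself would then come from the dashboard table's colour formatting options, keyed on the status field.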
Hello Team, Can anyone please help clarify the following query and suggest a better approach for deploying the Observability solution? I have an application deployed as a high-availability solution: it acts as primary/secondary, so the application runs on only one of the nodes at a time. Now we are integrating our application with Splunk Enterprise for observability. As part of the solution, we are deploying the Splunk OTel Collector + FluentD agent to collect the metrics/logs/traces.

Now how do we manage the integration? If the application is running on HOST A, I need to make sure both agents (Splunk OTel Collector + FluentD) are up and running on HOST A to collect and ingest data into Splunk Enterprise, and the agents on the other host, HOST B, need to be idle so that we don't ingest data into Splunk. This can be achieved by deploying a custom script (executed frequently under cron, say every 5 minutes, to check where the application is active and start the agent services accordingly). But how do we make sure the data ingested into Splunk is appropriate (without any duplicates) when handling this scenario, given there are 2 different hosts?

We would also like to avoid a drop-down in the dashboard for selecting the appropriate HOST to filter the data, because that makes it hard for the business team to understand where the application is currently running and select the host accordingly, so this approach does not make great sense to me. Is there a better approach to handle this situation? In case we have a load balancer for the application, can we make use of it to tell the Splunk OTel Collector + FluentD to collect data only from the active host and then send the data through the HTTP Event Collector?
Hello, one of my Splunk searches uses a .csv file. I'm trying to find where the .csv is located within Splunk and I can't find it. Is there any command that I can run in Splunk to find the file location, please?
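One hedged way to look it up from a search, assuming the .csv is a lookup file and my_lookup.csv is a placeholder for its real name: the lookup-table-files REST endpoint reports the owning app and, in eai:data, the path on disk.

```
| rest /servicesNS/-/-/data/lookup-table-files
| search title="my_lookup.csv"
| table title, eai:acl.app, eai:data
```

If the .csv is not a lookup but a file being monitored or read directly from disk, this endpoint will not list it.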
Currently, I need to join information from two different indexes. I cannot show the actual information as it is confidential, but I can give a general overview of what it should look like.

Search 1:
```
index=index1 sourcetype=sourcetype1 | table ApplicationName, ApplicationVersion, ApplicationVendor, cid
```
Result:

ApplicationName | ApplicationVersion | ApplicationVendor | cid
name            | 1.0.3              | vendor            | 78fds87324
...

Search 2:
```
index=index2 sourcetype=sourcetype2 | table hostname, user, cid
```
Result:

hostname   | user     | cid
domainname | username | 78fds87324
...

What I need is a way to show the ApplicationName, ApplicationVersion, ApplicationVendor, hostname and username all in one table, connected through the cid. Anyone have any ideas?
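A hedged sketch of the usual stats-by-common-key pattern, which avoids join and its subsearch limits entirely (field names taken from the two searches above):

```
(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| stats values(ApplicationName) as ApplicationName,
        values(ApplicationVersion) as ApplicationVersion,
        values(ApplicationVendor) as ApplicationVendor,
        values(hostname) as hostname,
        values(user) as user
    by cid
```

Each resulting row carries everything known about one cid from either index.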
Hello Friends, Current setup: we have multiple locations in Europe, and at each location we have multiple Windows servers forwarding their logs to a Windows log collector server; from the log collector the logs are collected into Splunk Cloud. At a few sites we are not receiving logs from the Windows servers. We checked the GPO policy and it is properly configured, but when checking gpresult some of the settings are not properly applied. I tried gpupdate and tried again, but the issue continues.
Thanks. I am trying to extract three fields from the message given below:

```
"message" : "BatchId : 7, RequestId : 100532188, Msg : Batch status to be update to SUCCESS",
```

Fields to extract: BatchId, RequestId, and Status (Status needs to capture SUCCESS).

```
| rex "BatchId\s*:\s*(?<batch>[^,]+),\s*RequestId\s:\s*(?<RequestID>[^,]+),\s*Msg : Batch status to be update to (?<Status>\w+)"
```
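The rex above is already close; a hedged variant that tolerates variable whitespace around every separator and keeps the same capture names looks like this (it assumes the text is matched against _raw; add field=message if the inner string is already an extracted field):

```
| rex "BatchId\s*:\s*(?<batch>[^,]+),\s*RequestId\s*:\s*(?<RequestID>[^,]+),\s*Msg\s*:\s*Batch status to be update to\s+(?<Status>\w+)"
```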
There is a practice of setting queueSize in the inputs.conf [http://<token>] stanza. queueSize overwrites the server.conf stanza:

```
[queue=httpInputQ]
maxSize
```

Now suppose you have multiple tokens with different queueSize values:

```
inputs.conf

[http://1]
queueSize=1

[http://2]
queueSize=2

[http://3]
queueSize=3

[http://4]
queueSize=4
```

Globally, only one inputs.conf stanza wins for the final httpInputQ size. This setting should only be set if 'persistentQueueSize' is set as well. If there are multiple HTTP inputs configured and each input has set 'queueSize' but 'persistentQueueSize' is not set, splunkd will create one in-memory queue and pick the 'queueSize' value from the first stanza, after sorting the HTTP stanzas matching the token of the first received HTTP event in ascending order. With multiple pipelines configured, each pipeline will create one in-memory queue depending on the first HTTP event received by that pipeline, so each pipeline might end up with a differently sized httpInputQ. If there are multiple HTTP stanzas configured and 'persistentQueueSize' is not set, prefer to set 'maxSize' under the 'queue=httpInputQ' stanza in server.conf.

So the best practice is to never set a per-token queueSize in inputs.conf. Instead, set it once in server.conf (if not setting persistentQueueSize):

```
[queue=httpInputQ]
maxSize
```
Revolutionizing how our customers build resilience across their entire digital footprint.

Splunk Community,

We are thrilled to announce an exciting new chapter in Splunk's history: we are joining forces and officially becoming part of Cisco. This is a major milestone for Splunk in our ongoing efforts to build a safer and more resilient digital world — and we couldn't be more excited for what's ahead.

The combined company will bring the full power of network data together with market-leading security and observability solutions. Together, we will provide unparalleled visibility and insights across our customers' entire digital footprint, enhancing your ability to protect your organization, reduce business risk, and accelerate innovation velocity. Cisco shares Splunk's passion for building thriving communities, and together, we expect to have even greater learning and collaboration opportunities for you.

What does this mean for you? For the immediate future, nothing truly changes. You should not expect any disruption to your Splunk experience as a result of this acquisition. You can continue to enjoy all of the Splunk Community platforms, programs, and events you know and love. What truly excites us is the potential to amplify the spirit of the Splunk Community on a larger scale. We know that this community thrives on creativity, collaboration, and fun, and with Cisco's help, we look forward to creating even more opportunities for you to share your voices, connect with others, and make a real impact.

Together with Cisco, Splunk will deliver:
- A complete security solution for threat prevention, detection, investigation, and response, utilizing network and endpoint traffic for unparalleled visibility.
- A comprehensive full-stack observability solution for delivering seamless and reliable digital experiences across multi-cloud and hybrid environments.
- An AI-powered platform that correlates business and technology data to unlock insights that accelerate innovation and build digital resilience.

We've collaborated with Splunkers, users and customers to answer the most commonly heard questions. Visit our website for FAQs and more information about Splunk joining Cisco. If you are a Splunk customer and have additional questions, please reach out to your account manager.

We are incredibly excited about our future, and over the coming months, we will continue to keep you updated about exciting developments in this new chapter. We are thrilled to be on this journey with you and we are grateful for your continued trust in Splunk.

~ The Splunk Community Team
Hi, I have a specific requirement for the timechart in my dashboard. I have a Single Value visualization which shows a value and a trend comparison. If I set the time range to the last 24 hours, it should display the last 24 hours' count as the value and the difference from the previous 24 hours as the trend value. I can achieve this by adding span=1d to the query, but when it comes to hours, it doesn't work the same way. Can anyone help me with this? Thanks in advance.
Hi, I am comparing two JSON data sets with respect to the values of some nested keys in them. The comparison is working fine, except that at the end I am getting some blank rows with no data in the columns except the diff column that I am inserting. I am including the query that I am using. Since I am using appendcols in this, the data sets returned by the two search commands would be as below, respectively:

```
data1={
  \"Sugar\": { \"prod_rate\" : \"50\", \"prod_qual\" : \"Good\" },
  \"Rice\": { \"prod_rate\" : \"80\", \"prod_qual\" : \"OK\" },
  \"Potato\": { \"prod_rate\" : \"87\", \"prod_qual\" : \"OK\" }
}

data2="{
  \"Sugar\": { \"prod_rate\" : \"50\", \"prod_qual\" : \"Good\" },
  \"Wheat\": { \"prod_rate\" : \"50\", \"prod_qual\" : \"Good\" }
}"
```

The actual query, with the proper search commands in place, is returning some blank rows. How can I remove them from the display?

```
index=data1
| eval grain_name = json_array_to_mv(json_keys(data1))
| mvexpand grain_name
| eval data = json_extract(data1, grain_name), qual = json_extract(data, "prod_qual")
| table grain_name, qual
| appendcols
    [ search index=data2
    | eval grain_name2 = json_array_to_mv(json_keys(data2))
    | mvexpand grain_name2
    | eval data2 = json_extract(data2, grain_name2), qual2 = json_extract(data2, "prod_qual")]
| eval diff = if(match(qual, qual2), "Same", "NotSame")
| table grain_name, qual, diff
```
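When the appendcols subsearch returns more rows than the main search, the extra rows have nothing in the main-search columns, which is what shows up here as blank rows with only diff populated. One hedged fix is simply to filter out rows where grain_name is empty before the final table (field name taken from the query above):

```
...
| eval diff = if(match(qual, qual2), "Same", "NotSame")
| where isnotnull(grain_name) AND grain_name != ""
| table grain_name, qual, diff
```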