All Topics



Will the Splunk DB connection task stop when the index is full?
Hi, I have Splunk servers (a full deployment with an indexer cluster and a search head cluster) running on Red Hat 9. We now want to harden the servers following the CIS standard. Will this have any impact on the Splunk application? Do any exceptions need to be made? Thanks
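A minimal sketch of a post-hardening sanity check, assuming a standard Linux install (adjust $SPLUNK_HOME for your layout): the CIS items that most often interact with Splunk are the file-descriptor/process limits and transparent huge pages, both of which splunkd warns about at startup.

# Limits Splunk expects (run as the splunk user; splunkd warns at startup if these are too low)
ulimit -n     # open file descriptors
ulimit -u     # max user processes
cat /sys/kernel/mm/transparent_hugepage/enabled   # Splunk recommends THP disabled ("never")

# Validate the on-disk configuration and look for warnings after hardening
$SPLUNK_HOME/bin/splunk btool check
grep -iE "ulimit|hugepage" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -20

Comparing these before and after applying the CIS baseline should surface most of the exceptions that need to be documented.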
I'm trying to discover which source inputs.conf file is responsible for pulling in the WinEventLogs. Our original implementation was back in 2019 and was completed by another SME who has since moved on. When we implemented Splunk Cloud, many other onsite components were implemented, including an IDM server. Since moving to the Victoria Experience we no longer utilize an IDM server, but we have the rest of the resources in place as shown in my attachment.

That said, I'm just trying to confirm where to filter my oswin logs from, but I'm not convinced I have identified the source. While I found the inputs.conf file under Splunk_TA_windows (where I'd expect it to be) on the deployment server, I'm not confident it's responsible for this data input, because all my entries in the stanzas specific to WinEventLog... have a disable = 1. So while I want to believe, I cannot.

I've looked over multiple files, but more importantly: where are my WinEventLogs truly being sourced from (which inputs.conf)? I've reviewed my resources on the deployment server, DMZ forwarder and syslog UFW server and I'm not finding anything else that would be responsible, nor anything installed regarding Splunk_TA_windows. However, I am indeed getting plenty of data, and I'm trying to be more efficient with our ingest and looking to filter some of these types of logs out. TIA
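One way to confirm which inputs.conf is actually feeding the WinEventLog data is to run btool on a forwarder that is sending the events rather than browsing the deployment server; a sketch, assuming a default Windows UF install path:

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug | findstr /i WinEventLog

The --debug flag prefixes every line with the file it came from, so you can see which app and layer (default vs local) defines the WinEventLog:// stanzas on that host, and whether they are actually disabled there.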
I'm wondering if anyone could advise on how to best standardize a log of events with different fields. Basically, I have a log with about 50 transaction types (same source and sourcetype), and each event can have up to 20 different fields based on a specific field, ActionType. Here are a few sample events with some sample/generated data:

2025-02-10 01:09:00, EventId="6", SessionId="123abc", ActionType="Logout"
2025-02-10 01:08:00, EventId="5", SessionId="123abc", ActionType="ItemPurchase", ItemName="Item2", Amount="200.00", Status="Failure", FailureReason="Not enough funds"
2025-02-10 01:07:00, EventId="4", SessionId="123abc", ActionType="ItemPurchase", ItemName="Item1", Amount="500.00", Status="Success"
2025-02-10 01:06:00, EventId="3", SessionId="123abc", ActionType="ProfileUpdate", ElementUpdated="Password", NewValue="*******", OldValue="***********", Status="Failure", FailureReason="Password too short"
2025-02-10 01:05:00, EventId="2", SessionId="123abc", ActionType="ProfileUpdate", ElementUpdated="Email", NewValue="NewEmail@somenewdomain.com", OldValue="OldEmail@someolddomain.com", Status="Success"
2025-02-10 01:04:00, EventId="1", SessionId="123abc", ActionType="Login", IPAddress="10.99.99.99", Location="California", Status="Success"

I'd like to put together a table with a user-friendly EventDescription, like below:

Time                 SessionId  Action         EventDescription
2025-02-10 01:04:00  123abc     Login          User successfully logged in from IP 10.99.99.99 (California).
2025-02-10 01:05:00  123abc     ProfileUpdate  User failed to update password (Password too short)
2025-02-10 01:06:00  123abc     ProfileUpdate  User successfully updated email from OldEmail@someolddomain.com to NewEmail@somenewdomain.com
2025-02-10 01:07:00  123abc     ItemPurchase   User successfully purchased Item1 for $500.00
2025-02-10 01:08:00  123abc     ItemPurchase   User failed to purchase Item2 for $200.00 (insufficient funds)
2025-02-10 01:09:00  123abc     Logout         User logged out successfully

Given that each action will have different fields, what's the best way to approach this, considering that there could be about 50 different events (possibly more in the future)? I was initially thinking this could be done using a series of case statements, like the one below. However, this approach doesn't seem too scalable or maintainable given the number of events and possible fields for each one:

eval EventDescription=case(ActionType="Login", case(Status="Success", "User successfully logged in from IP ".IPAddress." (".Location.")", 1=1, "User failed to login"), ActionType="Logout", ......etc

I was also thinking of using a macro to extract the fields and compose an EventDescription, which would be easier to maintain since the code for each Action would be isolated, but I don't think executing 50 macros in one search is the best way to go. Is there a better way to do this? Thanks!
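One pattern that scales better than a giant case() is to keep the per-ActionType text in a lookup of templates and substitute the field values at search time. A sketch, assuming a hypothetical lookup file event_descriptions.csv with columns ActionType, Status, Template, where templates look like "User successfully logged in from IP {IPAddress} ({Location}).":

| lookup event_descriptions.csv ActionType Status OUTPUT Template
| eval EventDescription=Template
| foreach IPAddress Location ItemName Amount ElementUpdated OldValue NewValue FailureReason
    [ eval EventDescription=replace(EventDescription, "\{<<FIELD>>\}", coalesce('<<FIELD>>', "")) ]
| table _time SessionId ActionType EventDescription

Adding the 50th action then means adding CSV rows rather than editing SPL; the foreach list only needs to contain every field any template references.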
For a particular sourcetype I am facing a log ingestion issue and getting the error below. As checked with the team, this log file cannot be split. Is there any solution to resolve this issue?
Hello All, we've recently been encountering an issue when editing a classic dashboard in Splunk. Whenever we try to edit a dashboard containing a "mailto" protocol, we receive the following error:

Uses scheme: "mailto", but the only acceptable schemes are: {"https", "http"}

However, dashboards without the "mailto" protocol are working fine and we are able to edit them without any issues. Has anyone experienced this before? Is there a known solution or workaround to bypass or resolve this issue, allowing us to edit dashboards that include the "mailto" protocol? I would appreciate any guidance or suggestions. Thanks in advance!
I've installed the Splunk Add-on Builder but the UI is blank / won't load. I've tried installing it on my HF (Heavy Forwarder) and my DS (Deployment Server) (the HF is under a lot of load), but the problem still persists.
Dear Splunkers!! Following the migration of our Splunk server from version 8.1.1 to 9.1.1, we have encountered persistent KV Store failures. The service terminates unexpectedly multiple times post-migration.

Issue Summary: As a workaround, I renewed the server.pem certificate and rebuilt the MongoDB folder. This temporarily resolves the issue, and KV Store starts working as expected. However, the corruption reoccurs the following day, requiring the same manual interventions.

Request for Permanent Resolution: I seek a permanent fix to prevent KV Store from repeatedly failing. Kindly provide insights into the root cause and recommend a robust solution to ensure KV Store stability post-migration. Looking forward to your expert guidance.
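Two checks that usually narrow this down before rebuilding again, as a sketch using standard paths and commands:

$SPLUNK_HOME/bin/splunk show kvstore-status
tail -100 $SPLUNK_HOME/var/log/splunk/mongod.log

show kvstore-status reports the current state, and mongod.log normally records why mongod stopped; since KV Store reuses server.pem and the [sslConfig] settings in server.conf, a certificate or cipher mismatch introduced by the 8.x-to-9.x upgrade tends to show up there, which would also explain why renewing server.pem only helps temporarily.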
How do I exclude 6 names from my dashboards? They come up in all my multiselects and several panels 
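One common approach, sketched with hypothetical field and lookup names: exclude the six values in the searches that populate the multiselects and panels, either inline

| stats count by user
| search NOT user IN ("name1","name2","name3","name4","name5","name6")

or from a small lookup so the list is maintained in one place:

| search NOT [ | inputlookup excluded_users.csv | fields user ]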
Hi all, I am trying to figure out a way, based on the data available in the table below, to add a column to the Yesterday and Last Week tables with the delta between the values.

The queries in the panels are simple stats counts grouped by Site (BDC or SOC) with the addtotals command specified. To display the values for yesterday and last week I am using time shifts within the query. As an example, this is the "yesterday" timeshift:

[| makeresults
 | addinfo
 | eval earliest=info_min_time - 86400
 | eval latest=info_max_time - 86400
 | table earliest latest]

I need to add a column in both the Yesterday and Last Week tables that shows the volume's delta in comparison with Today. I am trying to pass the result of the first query as a token so I can reference it in the other queries and use eval to calculate the delta, but I can't make it work. This is what I have added to the JSON to pass the result as a token:

"eventHandlers": [
  {
    "type": "action.setToken",
    "options": {
      "tokens": {
        "todayVolume": "$result.Count$"
      }
    }
  }
],

When I try this approach, Splunk complains that the token "$result.Count$" hasn't been set. I was also exploring the idea of using chained searches, but I think dynamic tokens are a cleaner, more efficient solution. I'd appreciate some assistance with figuring this out. Thank you in advance.
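If the token route keeps fighting back, the delta can also be computed inside a single search so each panel already has both values; a minimal sketch with placeholder index names and fixed time ranges for clarity:

index=your_index earliest=-1d@d latest=@d
| stats count AS Yesterday BY Site
| join type=left Site
    [ search index=your_index earliest=@d latest=now
      | stats count AS Today BY Site ]
| eval Delta = Today - Yesterday

A join keyed on Site avoids the positional-alignment caveat of appendcols, and the existing addtotals can be re-added at the end; in Dashboard Studio the same idea can also be expressed as a chain search off the Today base search, which sidesteps the $result.Count$ token entirely.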
Hi, we noticed that for the Splunk Add-on for Microsoft Cloud Services, CIM mapping is not enabled for all the sourcetypes: https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Sourcetypes/ In particular, for the mscs:kql sourcetype we are ingesting Azure network logs via sourcetype="mscs:kql" Type=AZFWNetworkRule. I would have expected this add-on to include Network datamodel CIM mapping without us having to do it ourselves (which we can if required). Is this the best add-on to use (or is there a better option) if you want more CIM mapping coverage by default, or have you had to do manual CIM mapping when using this TA? Thanks
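If manual mapping does turn out to be needed, the usual shape is an eventtype plus CIM tags in a small add-on; a sketch with illustrative stanza names (the actual field aliases depend on which fields AZFWNetworkRule carries):

eventtypes.conf
[mscs_kql_azfw_network]
search = sourcetype="mscs:kql" Type=AZFWNetworkRule

tags.conf
[eventtype=mscs_kql_azfw_network]
network = enabled
communicate = enabled

plus FIELDALIAS/EVAL entries in props.conf to map the source/destination IPs, ports and rule action onto the Network Traffic datamodel field names (src, dest, src_port, dest_port, action, etc.).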
Hey guys, my lead basically tells me that we're going to be deep diving into the indexes in our environment to extract some usage data and optimize some of the intake. We will mostly be in the Search app, writing queries to pull this info, usually against the audit index, trying to find which KOs/indexes/searches/etc. are being used, what's not being used, and just overall monitoring. Any advice or tips on this?
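A couple of starting points for that kind of review, sketched against the standard internal indexes: search activity lives in _audit and ingest volume in the license usage log.

index=_audit action=search info=completed
| stats count AS searches dc(user) AS users BY savedsearch_name
| sort - searches

index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY idx
| eval GB=round(bytes/1024/1024/1024,2)
| sort - GB

The first shows which saved searches actually run and by how many users (an empty savedsearch_name generally means ad-hoc searches); the second shows ingest per index, a reasonable proxy for where intake optimisation pays off. | rest /services/saved/searches and /services/data/ui/views are also handy for inventorying knowledge objects and their owners.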
Hello, I have a requirement in a dashboard. My multiselect input should automatically remove ALL (the default value) if I select any other value, and ALL should return if I deselect the selected value. Please help me get this result.

<input type="multiselect" token="app_name">
  <label>Application Name</label>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>app_name</fieldForLabel>
  <fieldForValue>app_name</fieldForValue>
  <search base="base_search">
    <query>| stats count by app_name</query>
  </search>
  <valuePrefix>app_name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>
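One commonly used Simple XML pattern for this is a <change> block inside the input that rewrites the form token; a sketch (worth testing the quoting and behaviour on your Splunk version):

<change>
  <!-- another value was added while "All" (*) was first: drop "All" -->
  <condition match="mvcount('form.app_name') &gt; 1 AND mvindex('form.app_name', 0) = &quot;*&quot;">
    <eval token="form.app_name">mvfilter('form.app_name' != "*")</eval>
  </condition>
  <!-- "All" was selected last: keep only "All" -->
  <condition match="mvcount('form.app_name') &gt; 1 AND mvindex('form.app_name', -1) = &quot;*&quot;">
    <set token="form.app_name">*</set>
  </condition>
  <!-- everything deselected: fall back to "All" -->
  <condition match="isnull('form.app_name') OR mvcount('form.app_name') = 0">
    <set token="form.app_name">*</set>
  </condition>
</change>

This goes inside the existing <input type="multiselect"> element, after the <delimiter>.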
Hi, I'm currently encountering the following error message in splunkd.log when I enable the custom TA add-on. I have a Python script that successfully tests the signed CSR, private key, and root CA; it can establish a connection and retrieve logs as expected. However, when using the created application, I am seeing the error message. I've double-checked the values, and everything seems to be the same. In our testing environment it works, but the only difference I noticed is that the root CA certificate is in .csr format. Should I convert it to .pem, as we did in the testing environment?

-0700 ERROR ExecProcessor - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/TA_case/bin/case.py" HTTPSConnectionPool(host='<HiddenForSensitivityPurpose>', port=443): Max retries exceeded with url: <HiddenForSensitivityPurpose>caseType=Service+Case&fromData=2025-02-06+17%3A23&endDate=2025-02-06+21%3A23 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)')))
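If the file configured as the CA really is a certificate signing request rather than a certificate, then yes: requests' verify= needs a PEM CA certificate (ideally the full chain up to the issuer), which is exactly what "unable to get local issuer certificate" is complaining about. A quick offline check is openssl verify -CAfile rootca.pem signed_cert.pem, and a hedged sketch of the client side follows (file names and the URL are placeholders for whatever the TA is configured with):

import requests

url = "https://example.invalid/case"                # placeholder for the hidden case API endpoint

resp = requests.get(
    url,
    cert=("signed_cert.pem", "private_key.pem"),    # client certificate + key used for mutual TLS
    verify="rootca_chain.pem",                      # must be a PEM CA certificate/chain, not a .csr
    timeout=30,
)
resp.raise_for_status()
print(resp.status_code)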
Can someone please provide a clear image of the Splunk Webhook allowlist setting from the Splunk Cloud Console? I am using the Splunk Cloud Trial version and it seems this option is not available in the Trial version. #splunkcloud #Webhookallowlist
Hello, I want to deploy 3rd-party SSL certs via an app using the deployment server, as there are too many Splunk forwarders to do this individually. This works; however, as there is an SSL line with the default password in server.conf, it reads this first and therefore won't read the correct SSL password in the app's server.conf file, stopping it from working. Is there a better way of doing this so that I don't need to write a script to comment out the SSL section in server.conf?
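One way around the precedence problem, sketched here with illustrative app and file names: because $SPLUNK_HOME/etc/system/local always wins over any app, leave sslPassword out of the deployed app entirely and instead protect the new private key with the passphrase the forwarders already have hashed there (on an untouched install that is the literal string "password"), so the existing system/local value still decrypts it. The app then only needs to ship the certificate files and point at them:

server.conf (in the deployed app)
[sslConfig]
serverCert = $SPLUNK_HOME/etc/apps/org_ssl_certs/certs/server_and_chain.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/org_ssl_certs/certs/ca.pem

If the key must use a different passphrase, the remaining options are effectively what you are trying to avoid: updating system/local/server.conf on the forwarders via a script or configuration management so it no longer overrides the app.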
I want to create a new index-time field, severity, if the raw JSON payload has a level field with the value Information, e.g.:

{ "level": "Information", "ORIGIN_Severity_name": "CRITICAL", "ProductArea": "Application", "ORIGIN_Product": "Infrastructure"}

What's wrong in my transforms.conf configuration? Any help is much appreciated.

transforms.conf
[severity]
REGEX = "level":\s\"(?<severity>\w+)
SOURCE_KEY = fields:level
FORMAT = severity::"INFO"
WRITE_META = true
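For comparison, a sketch of how this kind of index-time field is usually wired up. Three pieces are needed, and for raw JSON the SOURCE_KEY has to be _raw (fields:level only exists for keys that are already indexed or metadata); FORMAT values are also taken literally, so quoting INFO would put the quotes into the indexed value. The sourcetype stanza name is a placeholder:

props.conf (on the parsing tier)
[your_sourcetype]
TRANSFORMS-severity = severity

transforms.conf
[severity]
REGEX = "level"\s*:\s*"Information"
SOURCE_KEY = _raw
FORMAT = severity::INFO
WRITE_META = true

fields.conf
[severity]
INDEXED = true

With that in place the severity field is only written when the regex matches (level = Information); note that index-time changes only affect newly indexed events.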
Will the index delete data or stop writing data when its size exceeds the maximum?
We have JSON fields to be auto-extracted in Splunk, and some non-JSON data that needs to be removed before the fields are extracted. So I have the following props.conf on my indexers:

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SEDCMD-removeheader = s/^[^\{]*//g
SHOULD_LINEMERGE = False
INDEXED_EXTRACTIONS = JSON
TRUNCATE = 20000

and this props.conf on my SH:

[sony_waf]
KV_MODE = none
AUTO_KV_JSON = false

props.conf on my UF (which has been there from before):

[sony_waf]
NO_BINARY_CHECK = true
EVENT_BREAKER_ENABLE = true

When I did this, duplicate events started populating. When I remove INDEXED_EXTRACTIONS from the indexers and keep it in the UF props.conf, logs are not ingested at all. I tried setting KV_MODE = json on the SH (removing KV_MODE = none and AUTO_KV_JSON), still the same duplication. I'm completely confused here. Now, even after removing everything I had added, duplicate logs are still coming in, yet when I check the log path at the source, no duplicates are present there. Even after adding crcSalt, the issue remains. Please guide me on the correct config in the correct place.
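One arrangement that sidesteps the INDEXED_EXTRACTIONS placement question entirely is to do the header stripping and line breaking at index time on the indexers and leave the JSON field extraction to search time; a sketch reusing your settings (and note that crcSalt = <SOURCE> makes the forwarder re-read files it has already sent, which by itself produces duplicate events):

Indexer props.conf
[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
SEDCMD-removeheader = s/^[^\{]*//g
TRUNCATE = 20000

Search head props.conf
[sony_waf]
KV_MODE = json

UF props.conf
[sony_waf]
NO_BINARY_CHECK = true
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

With no INDEXED_EXTRACTIONS anywhere, the JSON fields are extracted exactly once (at search time), and removing crcSalt avoids re-ingesting files that were already indexed.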
Hi, I am displaying a table as a result of a search; however, I would like to add an additional column with static values based on an existing column. For example:

S.No  Name   Dept
1     Andy   IT
2     Chris  Bus
3     Nike   Pay

In the above table, I would like to add another column called Company and map its value based on the Dept column as below:

If Dept is IT, then the value for Company is XXXX
If Dept is Bus, then the value for Company is YYYY
If Dept is Pay, then the value for Company is ZZZZ

and the final table should look like:

S.No  Name   Dept  Company
1     Andy   IT    XXXX
2     Chris  Bus   YYYY
3     Nike   Pay   ZZZZ

@ITWhisperer
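A sketch of the two usual options, using the field and value names from the example: a case() eval for a short fixed mapping, or a lookup if the list will grow.

| eval Company=case(Dept=="IT","XXXX", Dept=="Bus","YYYY", Dept=="Pay","ZZZZ", true(),"Unknown")

Alternatively, keep the Dept-to-Company mapping in a small lookup file (hypothetical name dept_to_company.csv with columns Dept, Company) and append | lookup dept_to_company.csv Dept OUTPUT Company to the search, which keeps the mapping editable without changing the SPL.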