All Topics


Blocked auditqueue can cause randomly skipped searches, scheduler slowness on SH/SHC, and a slow UI.
In the following description, point 1 says to download and install the AppD app from the SNOW store. But then you say you need to do something else before doing the thing at point number 1. Am I mistaken in my understanding of the order and logic of how this is communicated? Regards
Are you ready to take your Splunk journey to the next level? We invite you to join our elite squad of Customer Advisory Board (CAB) members and be part of shaping the future of Splunk products!

Why should you consider becoming a CAB member? Let us break it down for you:
- Direct Impact: Your feedback will directly influence the development of Splunk products.
- Roadmap Outlook: Get exclusive insights into the Splunk product roadmap.
- Product Insights: Participate in feedback sessions with the Splunkers who build the products you use daily to help enhance user experiences.
- Exclusive Preview Access: Be the first to see and provide input on products and features before they launch.
- Networking: Connect with like-minded professionals and share best practices.

But that's not all! CAB members can choose to join one or more of our four specialized advisory boards:
- Security
- Observability
- Splunk Cloud Platform
- Splunk Enterprise Platform

We meet quarterly via Zoom, making it convenient for you to attend and share your valuable insights. Plus, all sessions are recorded and shared with CAB members and internal Splunk teams, so your voice will continue to make an impact even if you can't make it to every meeting.

Who are we looking for? Splunk Admins, System Engineers & Architects, Security Engineers, SOC Analysts, Application Developers, Service Owners, SREs, Product teams - if you're a daily Splunk user, we want YOU!

Ready to sign up and be part of the CAB crew? Click the link below to embark on this exciting journey with us! Sign Up Here! Let's shape the future of Splunk together!
We’ve enabled AWS PrivateLink for Observability Cloud, giving you an additional inbound connection to send metrics, traces, and API service data to the platform. With PrivateLink, you can now securely send your data directly to the Observability ingest endpoint from your AWS environment. This is the best practice for secure service connections in AWS, so you can prevent your sensitive data from traversing the internet.

Key Benefits

Utilizing AWS PrivateLink helps improve your security posture so you can maintain regulatory compliance when you need to prevent sensitive data from traversing the internet. Additionally, sending data directly allows you to save on data transfer costs.

Getting Started

To get started, contact the Splunk Sales Engineer aligned to your team and provide a few details, including your AWS Account ID, the AWS Region, and the VPC endpoint ID. The Splunk team will do the rest to configure your PrivateLink. Splunk Observability Cloud’s PrivateLink is enabled on ALL AWS realms and is limited to connections in the same AWS region.

Read the product documentation to learn more.
Hi, is there a way to change which versions of Forwarders are considered in support according to the Cloud Monitoring Console? Currently, v9.1.1 is showing as out of support. Many thanks
Hi All, I have two csv files.
File1.csv -> id, operation_name, session_id
File2.csv -> id, error, operation_name
I want to list the entries based on session_id, like -> id, operation_name, session_id, error. Basically, all the entries from File1.csv for the session_id, plus the errors from File2.csv. Could you please help with how to combine these csv files?
Note: I am storing the data to CSV via outputlookup since I couldn't find a way to search these in a single query, so I am trying to join from the csv files.
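One way to combine them without join, as a minimal sketch: assuming both files are uploaded as lookup files and id is the key shared between them, pull the rows from File1.csv and enrich them with error from File2.csv:

| inputlookup File1.csv
| lookup File2.csv id OUTPUT error
| table id operation_name session_id error

If the real shared key is operation_name rather than id, swap it in; lookup generally scales better than join for this kind of enrichment.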
Hi, in the official compatibility matrix there is no longer a column for Indexer 8.0.x, as it is no longer supported. https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers Does anyone know up to which version the Universal Forwarder is compatible with an 8.0.x Indexer (with an 8.0.x Heavy Forwarder in front)?
Hello, We ingest logs from another vendor into Splunk; each event contains a "score" field, predetermined by the 3rd party, ranging from 0 - 100. Is there a way to add that field value to the risk object score, instead of a static risk score, in the Risk Analysis adaptive response? I have been looking at using the Risk Factor editor, but I can't see a way other than setting the static value in the adaptive response to 100 and then creating 100 risk factors like this:
if('score'="10",0.1,1)
if('score'="11",0.11,1)
if('score'="12",0.12,1)
and so on. Thanks
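If the Risk Factor editor only allows static factors, one alternative, offered as a minimal sketch under stated assumptions: write risk events straight to the risk index from a scheduled search, carrying the vendor score through as risk_score. Here index=vendor_logs and the user field used as the risk object are placeholders for your data; risk_score, risk_object, and risk_object_type are the fields the ES risk framework reads:

index=vendor_logs score=*
| eval risk_score = score
| eval risk_object = user
| eval risk_object_type = "user"
| eval risk_message = "Third-party detection with vendor score " . score
| collect index=risk

Risk incident rules should then aggregate these scores per risk_object as usual; verify the exact field names against your ES version's risk documentation before relying on this.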
We use the ansible-role-for-splunk project found on GitHub: https://github.com/splunk/ansible-role-for-splunk Now we want to install third-party apps from Splunkbase. The framework seems to rely on all Splunk apps being available from a git repository. How are third-party apps such as "Splunk Add-on for Amazon Web Services (AWS)" supposed to be installed, unless they are extracted to a custom git repository first?
I had an issue with storage. I was at another site for 2 weeks and we reached the max limit on our drive. I had to reprovision in VMware, and while it was out of storage we had issues; I can't remember the error message, but it was related to storage. I fixed the storage issue, rebooted, had to reset my certificate, and everything looked fine. A day later we started getting the license issue. I read the articles in the community but didn't fully understand them. I think it's polling the environment for the time that my storage limits were reached? It's been 4 days with us being over the licensing limit. Looking back over the last year, we have never been close to our limits. Any help would be appreciated.
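It may help to confirm whether ingest actually spiked during the outage window. A standard check against the license manager's usage log (the index, source, and field names here are Splunk defaults, not guesses about this environment):

index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) as bytes
| eval GB = round(bytes/1024/1024/1024, 2)
| fields _time GB

Depending on license type and version, repeated warnings within a rolling 30-day window can add up to a violation, so comparing daily GB against the license quota day by day is worth doing.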
Hi Team, I have got a request to plot a graph of the previous 30 days, but the org has a retention period of 7 days set on the data set. As a solution, I am pushing data from a query that captures HTTP status codes to a lookup file. The CSV file consists of the following fields:
1. _time
2. 2xx
3. 4xx
4. 5xx
Also, I have created a time-based lookup definition. But when I try to plot the graph, the "_time" field is not coming up on the x-axis. Can you please help with how this can be achieved?
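When a chart built from inputlookup shows no _time axis, the usual cause is that _time came back as a string, so timechart cannot bucket it. A minimal sketch, assuming a hypothetical lookup name http_status_history.csv and that _time was stored either as epoch seconds or as a formatted string (adjust the strptime format to whatever outputlookup actually wrote):

| inputlookup http_status_history.csv
| eval _time = coalesce(tonumber(_time), strptime(_time, "%Y-%m-%d %H:%M:%S"))
| timechart span=1d sum(*xx) as *xx

The sum(*xx) wildcard covers the 2xx, 4xx and 5xx columns without having to quote field names that start with a digit.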
Splunk Forwarder did not send any data
Hi All, hope this finds you well. I have built a pretty simple search query for my dashboard, plotting a line chart (for monitoring payments made with different debit/credit card types, e.g. Giro, Mastercard, etc., every 5 minutes) using the transaction command, then searching for the card type in the log and extracting the value with regex into the field named "CardType".

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store = substr(host,1,7)
| eval Register = substr(host,8,2)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard = if((CardType=="girocard"),1,0)
| timechart span=5m sum(girocard) AS "Girocard"

Now I have to modify the query to filter it by Country and Store. The query I am using is:

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store = substr(host,1,7)
| eval Register = substr(host,8,2)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard = if((CardType=="girocard"),1,0)
| append [| inputlookup Stores_TimeZones.csv where Store=tkg* ]
| timechart span=5m sum(girocard) AS "Girocard" latest(Country) AS Country latest(City) AS City

I am unable to get the output for Country and City. What am I doing wrong? Please help. Thanks in advance.
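The appended lookup rows carry no _time and are never matched to the events, which is why latest(Country) and latest(City) come back empty. A sketch of the usual approach instead: enrich each event with the lookup command, keyed on the Store value you already extract (this assumes Stores_TimeZones.csv has a Store column in the same tkg* format as your substr extraction):

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store = substr(host,1,7)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard = if(CardType=="girocard",1,0)
| lookup Stores_TimeZones.csv Store OUTPUT Country City
| where like(Store, "tkg%")
| timechart span=5m sum(girocard) AS "Girocard" latest(Country) AS Country latest(City) AS City

With the events enriched in place, the Country and City values ride along into timechart instead of sitting in unrelated appended rows.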
Hi, I can't start the controller. I have attached the error that I am getting. Please suggest how I can solve this issue. Thanks.
How do I avoid sending a Splunk report via email if no results are found? I cannot change it to an alert and use "number of results > 0", as I need to send it as a report with records; so I need to implement this as a report only, not as an alert. I have gone through the existing posts but could not find a solution. Is there any setting in the Advanced Edit which could help?
Hi, I am trying to configure my alert with an advanced time range like this: earliest = -60m@m, latest = -40m@m. But when I save, Splunk tells me "The changes you have done to this alert time range will not be saved" and "the time range has not been updated; change the alert type in order to modify the time range". What is wrong, please? I also tried to use the cron below, but the cron is not taken into account: */20**** Thanks for your help
Event and Report extract rules: use the payment business events to identify transactions which have ACCP clearing status (NPP 1012 / NPP 1013) with a missing Settlement Notification event (NPP 1040). The event names are:
- "NPP 1033_CR_INBOUND"
- "NPP 1012_CLEARING_INBOUND"
- "NPP 1013_RETURN_INBOUND"
- "NPP 1040_SETTLEMENT_RECEIVED"
The report should include the following fields:
- Time from NPP 1033
- TXID from NPP 1033
- Amount from NPP 1012 or NPP 1013

I have already created this query:

index=nch_apps_nonprod applications=fis-npp source=fis-npp-sit4 ("NPP 1012_CLEARING_INBOUND" OR "NPP 1013_RETURN_INBOUND" OR "NPP 1033_CR_INBOUND")
| rex field=message "eventName=\"(?<eventName>.*?)\""
| rex field=message "txId=\"(?<txId>.*?)\""
| rex field=message "amt=\"(?<amt>.*?)\""
| rex field=message "ibm_datetime=(?<ibm_datetime>.*?),"
| eval Participant = substr(txId,1,8)
| stats values(eventName) as eventName, min(ibm_datetime) as Time, values(amt) as amt by txId, Participant
| where mvcount(eventName) >= 3 AND mvfind(eventName, "NPP 1040") < 0
| table Time eventName Participant amt

but I am not getting any result.
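Two things in that query would explain zero results, offered as a sketch rather than a definitive fix: mvfind returns NULL when nothing matches (never a negative number), so the "< 0" test never fires; and the NPP 1040 events have to be in the base search before their absence per transaction can be tested. Field and event names below are carried over from the query above, so verify them against the actual data:

index=nch_apps_nonprod applications=fis-npp source=fis-npp-sit4 ("NPP 1012_CLEARING_INBOUND" OR "NPP 1013_RETURN_INBOUND" OR "NPP 1033_CR_INBOUND" OR "NPP 1040_SETTLEMENT_RECEIVED")
| rex field=message "eventName=\"(?<eventName>.*?)\""
| rex field=message "txId=\"(?<txId>.*?)\""
| rex field=message "amt=\"(?<amt>.*?)\""
| rex field=message "ibm_datetime=(?<ibm_datetime>.*?),"
| eval Participant = substr(txId,1,8)
| stats values(eventName) as eventName min(ibm_datetime) as Time values(amt) as amt by txId, Participant
| where mvcount(eventName) >= 3 AND isnull(mvfind(eventName, "NPP 1040"))
| table Time txId eventName Participant amt

Grouping by txId collects each transaction's events into one multivalue field, and isnull(mvfind(...)) then keeps only the transactions where no NPP 1040 settlement event arrived.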
We have a Splunk gateway HF that sends alerts when disk usage is more than 80%, and this alert is triggered frequently. To resolve the issue we need to clear space on the mount point /mnt/spk_fwdbck. This mount point has folders and subfolders going back 3 years, with subfolders like: acs5x, apc, blackhole, bpe, cisco-ios, oops, paloalto, pan_dc, vpn, windows, unix, threatgrid, pan-ext, ise, ironport, firewall, f5gtmext, f5-asm-tcp. Are these folders safe to delete based on the years 2020 to 2023? Can we delete complete previous years' logs, like 2020, and if so, does it affect anything? I am trying to understand this concept; please help.
I have a query below that looks at an index and outputs to a csv file. However, the size of the csv keeps growing, and I would like to purge entries older than 90 days. How do I do it?

index=suspicious_domain
| rename "sources{}.source_name" as description, value as domain, last_updated as updated, mscore as weight
| stats values(type) AS type latest(updated) as updated latest(weight) as weight latest(description) as description latest(associations_type) as associations_type latest(associations_name) as associations_name by domain
| fields - count
| outputlookup append=t suspicious_domain.csv
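One way to purge, as a sketch: schedule a second search that rewrites the lookup keeping only the last 90 days. This assumes the updated field holds epoch seconds; if outputlookup stored it as a formatted string, convert it with strptime before the comparison:

| inputlookup suspicious_domain.csv
| where updated >= relative_time(now(), "-90d")
| outputlookup suspicious_domain.csv

Note the absence of append=t here: the overwrite is what drops the expired rows.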
Is there a way of capturing the x, y and z data from a stacked chart? At the moment, my x, y and z are as follows:
x = build info
y = duration
z = process name (various names stacked in the same column)