All Topics
I have the Splunk search below, which returns the top 10 for only one day, and I know the reason why. How can I tweak it to get the top 10 for each date? That is, if I run the search on 14-Oct, the output must include 10-Oct, 11-Oct, 12-Oct, and 13-Oct, each with the 10 table names with the highest insert sum.

    index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
    | bin _time span=1d
    | stats sum(numRows) as count by _time, table_Name
    | sort limit=10 +_time -count

Thanks in advance
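A minimal sketch of one common way to keep the top 10 per day rather than overall, assuming the same field names as above: rank rows within each day with streamstats after sorting, then keep the first 10 of each day.

    index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
    | bin _time span=1d
    | stats sum(numRows) as count by _time, table_Name
    | sort 0 +_time -count
    | streamstats count as day_rank by _time
    | where day_rank <= 10

Here sort 0 removes the row limit so every day is ranked, and streamstats restarts its counter each time _time changes, so day_rank runs 1..N within each day.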
Hello there, our shop uses Proofpoint VPN for our remote users to access on-prem resources. I've been looking on Splunkbase to see if there's a published app, but I don't see any add-on for VPN data ingestion. I see there's a Proofpoint email security add-on, but it doesn't seem to relate to VPN logs. Any ideas what add-ons/apps will work for it? Thanks.
In release 9.2.2403 I see that "You can customize the text color of dashboard panel titles and descriptions with the titleColor and descriptionColor options in the source code..." But I'm not sure how to modify the source code appropriately to make this work. If I have this basic starting point:

    {
        "type": "splunk.table",
        "title": "Sample title for testing color",
        "options": {},
        "context": {},
        "containerOptions": {},
        "showProgressBar": false,
        "showLastUpdated": false
    }

Where can I insert titleColor? My Splunk Cloud version is 9.2.2403.108.
Trying to use syslog-ng with the latest Splunk Enterprise. I am getting the error "Failed to acquire /run/systemd/journal/syslog socket, disabling systemd-syslog source" when I try to run the service manually. This error prevents me from running the syslog-ng service via systemctl during bootup. Any ideas or help would be appreciated.
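That warning usually comes from syslog-ng's local system() source trying to claim the journal socket. A minimal sketch of a network-only configuration that sidesteps it, assuming the goal is to receive remote syslog on port 514 and write files for a Splunk file monitor to pick up (the port and paths are assumptions):

    # /etc/syslog-ng/syslog-ng.conf (sketch)
    source s_network {
        udp(ip(0.0.0.0) port(514));
        tcp(ip(0.0.0.0) port(514));
    };

    destination d_splunk_files {
        # one directory per sending host; point a Splunk monitor input here
        file("/var/log/remote/${HOST}/messages" create_dirs(yes));
    };

    log { source(s_network); destination(d_splunk_files); };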
I know that not every feature in Dashboard Studio has been exposed in the UI yet. I see that you can set tokens on interaction with visualizations, but I'm not seeing anything similar for inputs. Does anyone know if there is a change event handler for inputs in Dashboard Studio like there is in the XML dashboards? I've not seen anything in the docs, but I could just be looking in the wrong place. Thanks.
Hi All, I have this calculation, and at the end I am using where to keep only what I need. Splunk suggests pushing this into the base search:

    index=xyz AND source=abc AND sourcetype=S1 AND client="BOFA" AND status_code

How do I get this to return only the status codes I need? Codes >=199 and <300 belong to my success bucket, and >=499 belong to my error bucket.

    | eval Derived_Status_Code=case(
        status_code>=199 AND status_code<300, "Success",
        status_code>=499, "Errors",
        1=1, "Others") ``` I do not need anything that is not in the above conditions ```
    | table <>
    | where Derived_Status_Code IN ("Errors", "Success")

I want to avoid the where and move the filter into the search using AND. Thank you so much for your time.
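A minimal sketch of one way to move the filter into the base search, assuming status_code is extracted as a numeric field at search time:

    index=xyz source=abc sourcetype=S1 client="BOFA"
        ((status_code>=199 AND status_code<300) OR status_code>=499)
    | eval Derived_Status_Code=if(status_code>=499, "Errors", "Success")

Because the base search already drops everything outside the two buckets, the "Others" fallback in case() and the trailing where are no longer needed.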
I'm looking for the average CPU utilization of 10+ hosts over a fixed period last month. However, every time I refresh the URL or the metrics, the number changes drastically, whereas when I do the same for 2 other hosts, the number remains the same between refreshes. Is it because it is doing sampling somewhere? If so, where can I disable the sampling config?
I'm trying to configure the Splunk Universal Forwarder to send logs to Logstash. I only have access to the Universal Forwarder (not a Heavy Forwarder), and I need to forward audit logs from several databases, including MySQL, PostgreSQL, MongoDB, and Oracle. So far, I've been able to send TCP syslogs to Logstash using the Universal Forwarder. Additionally, I've successfully connected to MySQL using Splunk DB Connect, but I'm not receiving any logs from it in Logstash. I would appreciate any advice on forwarding database audit logs through the Universal Forwarder to Logstash in real time, or is there any provision for creating a sink or something? Any help or examples would be great! Thanks in advance.
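For reference, a minimal sketch of the outputs.conf shape that sends raw (non-Splunk-protocol) TCP from a Universal Forwarder to a third-party receiver such as Logstash; the hostname and port are assumptions:

    # outputs.conf on the Universal Forwarder (sketch)
    [tcpout]
    defaultGroup = logstash

    [tcpout:logstash]
    server = logstash.example.com:5514
    # send plain newline-terminated events instead of the Splunk-to-Splunk protocol
    sendCookedData = false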
Hi, we have data from Change Auditor coming in via a HEC setup on a Heavy Forwarder. This HF instance was upgraded to version 9.2.2. After that, I am seeing a difference in the way Splunk displays new events on the SH: it is now converting UTC->PST. I ran a search for the previous week, and for those events it converts the timestamp correctly, from UTC->Eastern. I am a little confused, since both searches are done from the same search head against the same set of indexers. If there were a TZ issue, wouldn't Splunk have converted both incorrectly? I also ran the same searches on an indexer with identical output: recent events show in PST, whereas older events continue to show as EST. [Example screenshots omitted: one for the previous week, one for recent events showing the UTC->PST conversion instead.] I did test this manually via Add Data, and Splunk correctly formats the timestamp to Eastern. How can I troubleshoot why recent events in search are showing a PST conversion? My current TZ setting on the SH is still set to Eastern Time. I've also confirmed that the system time for the HF, indexers, and search heads is set to Eastern. Thanks
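Since HEC data is parsed on the HF, timestamp interpretation is usually governed there. A minimal sketch of the props.conf override that pins the input time zone, assuming a sourcetype name of changeauditor (the sourcetype name is an assumption):

    # props.conf on the Heavy Forwarder (sketch)
    [changeauditor]
    # interpret incoming timestamps that carry no explicit offset as UTC
    TZ = UTC

Comparing btool output from before and after the 9.2.2 upgrade (splunk btool props list changeauditor --debug) can show whether an app change altered the effective TZ.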
I have a log with a sample like the following:

    POST Uploaded File Size for project id : 123 and metadata id : xxxxxxxxxxxx is : 1234 and time taken to upload is: 51ms

So here the project id is 123, the size is 1234, and the upload time is 51ms. I want to extract the project id, the size, and the upload time as fields. Also, regarding the upload time, I guess I just need the number, right?
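A minimal sketch using rex, assuming the message layout shown above is stable; the field names are my own choices:

    ... | rex "project id : (?<project_id>\d+) and metadata id : \S+ is : (?<file_size>\d+) and time taken to upload is: (?<upload_ms>\d+)ms"

Capturing only the digits before "ms" means upload_ms comes out as a bare number, ready for stats or eval.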
Hi Splunk community, I have a quick question about an app, such as the Microsoft Cloud Services app, in a multiple Heavy Forwarder environment. The app is installed on one Heavy Forwarder and makes API calls to Azure to retrieve data from an event hub and store it in an indexer cluster. If the Heavy Forwarder where the add-on is installed goes down, no logs are retrieved from the event hub. So, what are the best practices for making this kind of app, which retrieves logs through API calls, more resilient? The same applies to some Cisco add-ons that collect logs from Cisco devices via an API. For now, I will configure the app on another Heavy Forwarder without enabling data collection, but in case of failure, human intervention will be needed. I would be curious to know what solutions you implement for this kind of issue. Thanks, Nicolas
Splunk Lantern is a Splunk customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that’s possible with data sources and data types in Splunk. This month, we’re excited to share some articles that show you new ways to get Cisco and AppDynamics integrated with Splunk. We’ve also updated our Definitive Guide to Best Practices for IT Service Intelligence (ITSI), and as usual, we’re sharing all the rest of the use case, product tip, and data articles that we’ve published over the past month. Read on to find out more.

Splunking with Cisco and AppDynamics

Here on the Splunk Lantern team we’ve been busy working with experts in Cisco, AppDynamics, and Splunk to develop articles that show how our products can work together. Here are some of the most recent articles we’ve published, and keep an eye out for more Cisco and AppDynamics articles over the coming months!

Monitoring Cisco switches, routers, WLAN controllers and access points shows you how to create a comprehensive solution to monitor Cisco network devices in the Splunk platform or in Splunk Enterprise Security. Learn how to get set up, create visualizations, and troubleshoot common problems in this new use case article.

Enabling Log Observer Connect for AppDynamics teaches you how to configure Log Observer Connect for AppDynamics, allowing you to access the right logs in Splunk Log Observer Connect with a single click, all while providing troubleshooting context from AppDynamics.

Looking for more Cisco and AppDynamics use cases? Check out our Cisco and AppDynamics data descriptor pages for more configuration information, use cases, and product tips, and please let us know in the comments what other articles you’d like to see!

ITSI Best Practices

The Definitive Guide to Best Practices for IT Service Intelligence is a must-read resource for ITSI administrators, with essential guidelines that help you unlock the full potential of ITSI. We’ve just updated this resource with fresh articles to help you ensure optimal operations and exceptional end-user experiences.

Using dynamic entity rule configurations is helpful for anyone who often adds or removes entities from their configurations. Learn how to create a rule configuration that updates immediately and without the need for service configuration changes, reducing the time and risk of error that can result from manually reconfiguring entity filter rules.

If you use the ITSI default aggregation policy, you might not know that you shouldn’t be using it as your primary aggregation policy. Learn why, and how to build policies that better fit your needs, in Utilizing policies other than the default policy.

Building your own custom threshold templates shows you how to use and customize the 33 ITSI out-of-the-box thresholding templates, with the ability to configure time policies, choose different thresholding algorithms, and adjust sensitivity configurations.

Finally, Knowing proper adaptive threshold configurations explains how to use adaptive thresholding in the most effective way possible, helping you to avoid confusing or noisy configurations.
These four new articles are just some of the many articles in the Definitive Guide to Best Practices for IT Service Intelligence, so if you’re looking to improve how you work with ITSI, don’t miss this helpful resource.

The Rest of This Month’s New Articles

Here’s everything else we’ve published over the month:

Maximizing performance with the latest Splunk platform capabilities
Monitoring LangChain LLM applications with Splunk
Introduction to the Splunk ACS Github Action CI/CD Starter
Integrating Kubernetes and Splunk Observability Cloud
Expanding AWS log ingestion capabilities with custom logs in Splunk Data Manager
Using Ingest Processor to convert JSON logs into metrics
Using generative AI to write and explain SPL searches

We hope you’ve found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
Hi guys, I hope someone can help me out or give me a pointer here. When I run my searches I always get events in the future. I usually fix the time picker so it stops, but afterwards I have to put the events in order, and it's just adding a step to every search I make. Is there a way I can implement some SPL to make sure that I only get events up to the current time instead of the future?
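A minimal sketch of one way to cap results at the current time directly in SPL (the index name is a placeholder):

    index=your_index
    | where _time <= now()
    | sort - _time

The where clause drops any event stamped later than the moment the search runs, and the sort keeps the newest events first, so no manual reordering is needed afterwards.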
Hi Team, the XML for my dashboard consists of multiple search queries within a panel. What can I add to make the dashboard automatically refresh along with the panels? I have followed the documentation (http://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML): I included a refresh interval in the form attribute and set the refresh type and refresh interval for individual panels using the <search> element.

    <form refresh="30">
      <row>
        <panel>
          <table>
            <search>
              <query> ... </query>
              <earliest>-60m@m</earliest>
              <latest>now</latest>
              <refresh>60</refresh>
              <refreshType>delay</refreshType>
            </search>
          </table>
        </panel>
      </row>
    </form>

Here, I am using a div for each table query and appending these child tables to a list under the parent table, in a dropdown manner, using JavaScript. With this implementation, the refresh does not happen at the specified interval, and the dropdown table collapses at every refresh interval, so we need to reload the entire dashboard to see the dropdown content in the child table.
Hello Splunkers!! I am getting a "Bad allocation" error on all the Splunk dashboard panels. Please help me identify the potential root cause.
Hi there. This morning I did an SHC restart and found something very strange on the SHC members:

    WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#1:8089 Authentication Failed
    WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#2:8089 Authentication Failed
    WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#3:8089 Authentication Failed
    WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#4:8089 Authentication Failed
    GetRemoteAuthToken [1964778 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#1:8089 due to: Connect Timeout; exceeded 5000 milliseconds
    GetBundleListTransaction [1964778 DistributedPeerMonitorThread] - Unable to get bundle list from peer: https://OLDIDX#2:8089 due to: Connect Timeout; exceeded 60000 milliseconds
    GetRemoteAuthToken [2212932 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#3:8089 due to: Connect Timeout; exceeded 5000 milliseconds
    GetRemoteAuthToken [2212932 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#4:8089 due to: Connect Timeout; exceeded 5000 milliseconds

All the OLDIDX hosts are old servers, turned off and shut down! None of the SHC members has OLDIDX#* in its distributed search config. I recently upgraded a v7 infrastructure to v8, and I searched all the .conf files for the IPs of OLDIDX#*; none of them was found. Where are those "artifacts" stored? Is there something in the raft state of the new SHC? Do I need to remove all the SHC configuration and redo it from the beginning? These messages appear in splunkd.log ONLY DURING the restart of the SHC. During the day, while using the SHC, I never had, and still don't have, any similar message. Thanks.
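A minimal sketch of how to list the distributed peers a member actually has registered, which can differ from what the .conf files show (credentials and paths are placeholders):

    # on each SHC member
    $SPLUNK_HOME/bin/splunk list search-server -auth admin:changeme

    # or via the management port
    curl -k -u admin:changeme https://localhost:8089/services/search/distributed/peers

If the old indexers appear here but in no .conf file, they may have been added dynamically (for example via the REST endpoint), and can usually be removed with splunk remove search-server.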
Dear Splunkers, I would like to ask for support in giving specific users the capabilities to edit permissions for alerts and dashboards. We have several different users created, and this specific user has inherited the Power role, but despite that, users are not allowed to modify permissions even for their own dashboards or alerts. Can you please advise? Thank you, Stives
I have a search X that shows requests and a search Y that shows responses. Value A = the number of X events; value B = the number of Y events. I want to calculate a new value C = A - B (which would show the number of requests where the response is missing). How can I calculate C?
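A minimal sketch of one common pattern, assuming both event types live in the same index and can be distinguished by a type field (the index and field names are placeholders):

    index=myindex (type=request OR type=response)
    | stats count(eval(type="request")) as A, count(eval(type="response")) as B
    | eval C = A - B

Computing both counts in a single stats pass avoids joining two separate searches.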
We are looking to deploy Edge Processors (EP) in a high-availability configuration, with 2 EP systems per site and multiple sites. We need to use Edge Processors (or Heavy Forwarders, I guess?) to ingest and filter/transform the event logs before they leave our environment and go to our MSSP Splunk Cloud. Ideally, I want the Universal Forwarders (UF) to use the local site EPs; however, if those are unavailable, I would like the UFs to fail over to the EPs at another site. I do not want the UFs to use the EPs at another site by default, as this would increase WAN costs, so I can't simply list all the servers in the defaultGroup. For example:

    [tcpout]
    defaultGroup = site_one_ingest

    [tcpout:site_one_ingest]
    disabled = false
    server = 10.1.0.1:9997,10.1.0.2:9997

    [tcpout:site_two_ingest]
    disabled = true
    server = 10.2.0.1:9997,10.2.0.2:9997

Is there any way to configure the UFs to prefer the local Edge Processors (site_one_ingest) but fail over to the second site (site_two_ingest) if those systems are not available? Is it also possible for the configuration to support automated failback/recovery?
Still a total newb here, so please be gentle. On Microsoft Windows 2019 servers we have an indexer cluster, and here's how the hot and cold volumes are defined on it:

    C:\Program Files\Splunk\etc\system\local\indexes.conf

    [default]

    [volume:cold11]
    path = E:\Splunk-Cold
    maxVolumeDataSizeMB = 12000000

    [volume:hot11]
    path = D:\Splunk-Hot-Warm
    maxVolumeDataSizeMB = 1000000

That I can live with, but on our search heads here's how we point at the volumes, and this doesn't look right to me:

    C:\Program Files\Splunk\etc\apps\_1-LDC_COMMON\local\indexes.conf

    [volume:cold11]
    path = $SPLUNK_DB

    [volume:hot11]
    path = $SPLUNK_DB

Should the stanzas on the search heads match the ones on our indexers?