We've updated the look and feel of the team landing page in Splunk Observability. The team landing page is where you can go to see any active alerts and dashboard groups that have been linked to a team in Splunk Observability. When you use the teams feature, each team can curate the custom dashboards and detectors that appear on its team landing page. For instance, as a team lead I could collect all the dashboards related to the services my team is responsible for, and link my team's detectors so that any alerts on those services appear here. This is a good way to help a new team member quickly find useful content.

[Screenshot: the old team landing page.]

The refreshed team landing page contains all the same information it had before (team name, description, linked alerts, and dashboards), but its look and feel has been updated to better align with other areas of Splunk Observability. Team dashboards are arranged in a searchable, sortable table, and we can now see the full list of alerts linked to our team directly, in addition to the count.

[Screenshot: the team landing page's new look.]

To create and manage teams, click the Settings icon, then click Teams for a full list of teams in your organization. Click here to read more about teams in Splunk Observability.
How do I highlight empty fields in a dashboard with colors? Simple steps, please.
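One possible approach, a minimal sketch assuming a Simple XML table and hypothetical index and field names (your_index, host, status): replace empty values with a sentinel in the search, then color that sentinel with an expression-based palette in the table's formatting.

    index=your_index | table host status | fillnull value="(empty)" host status

And in the panel's XML source, inside the table element:

    <format type="color" field="status">
      <colorPalette type="expression">if(value == "(empty)", "#DC4E41", "#FFFFFF")</colorPalette>
    </format>

The fillnull step matters because color formatting needs a concrete value to match on; truly missing cells otherwise have nothing to test.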
John:x:/home/John:/bin/bash

Is there a way to extract the colon-separated fields from the above? We have many users in this format from /etc/passwd:

John - username
x - passwd
/home/John - path
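One possible approach, a minimal sketch assuming each event matches the four-field sample above (note that a real /etc/passwd line has seven colon-separated fields, so the pattern would need extending for full passwd data):

    ... | rex field=_raw "^(?<username>[^:]+):(?<passwd>[^:]*):(?<path>[^:]*):(?<shell>[^:]*)$"
        | table username, passwd, path, shell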
Splunk Lantern is Splunk's customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that's possible with data sources and data types in Splunk. This month we're highlighting some great new updates to our Getting Started Guide for Enterprise Security (ES) that provide you with easy ways to get going on this powerful platform, as well as new data articles for MS Teams. As usual, we're also sharing the rest of the new articles we've published this month. Read on to see what's new.

Getting Started with Splunk Enterprise Security

Lantern hosts Getting Started Guides for all of Splunk's products across the platform, plus premium Security and Observability products. Our Getting Started Guides are great for onboarding new users, but even if you're more experienced they can help ensure you haven't missed any key steps or resources that can take your product usage to the next level. This month, we've been busy updating our Getting Started Guide for Enterprise Security. The guide now features new videos from Splunk experts walking you through how to use Enterprise Security dashboards, new guidance on how to find and adopt use cases, and links to all of the resources you'll need to be successful with ES. Use the updated Getting Started Guide as your comprehensive toolkit for mastering Enterprise Security, and check it out to see how you can enhance your security posture and stay ahead of challenges with expert guidance at your fingertips.

Microsoft Teams Data Articles

We've also published some helpful configuration guidance for users of the Microsoft Teams Add-on for Splunk. This add-on collects Teams call record data, and our guide on Getting started with the Microsoft Teams Add-on for Splunk shows you how to retrieve that data. Once you're set up, you can check out the guides Getting started with Microsoft Teams call record data and Getting started with Microsoft Teams call record data and Azure Functions to learn how call record data is made available and how best to utilize it.

Everything Else New This Month

Here are the rest of the new articles that we've published this month:

Automating Know Your Customer continuous monitoring requirements
Integrating REST endpoints with On-Call
Monitoring major Cloud Service Providers (CSPs)
Getting started with the Google ChromeOS App for Splunk

We hope you've found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
We are indexing email metadata logs from various regions (China, US, Mexico, Italy) in Splunk Cloud. The retention policy for these metadata logs is 270 days. We want to change the retention policy for a few of the regions; for example, we want to store China metadata logs for only 30 days and all other logs for 270 days. How can we achieve this? We'd appreciate any input here.
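Retention in Splunk Cloud is set per index, so one possible approach is to route the China events into their own index and give that index a 30-day retention. A sketch, assuming a hypothetical sourcetype email_metadata and that the region appears literally in the raw event; in Splunk Cloud the retention itself is set on the Indexes management page, and the indexes.conf stanza below only illustrates the on-prem equivalent:

    # props.conf
    [email_metadata]
    TRANSFORMS-route_china = route_china_email

    # transforms.conf
    [route_china_email]
    REGEX = region=china
    DEST_KEY = _MetaData:Index
    FORMAT = email_china

    # indexes.conf (on-prem equivalent; 30 days = 2592000 seconds)
    [email_china]
    frozenTimePeriodInSecs = 2592000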
In my Splunk search for the date of the Nessus plugins feed version used in a scan, the value comes back in its original year-month-day-time format (for example, November 7th, 2023 at 12:00 displays as 202311071200). I would like to convert it into a readable format that I can then manipulate in Splunk, for instance to get the epoch time. How would I go about doing this?
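A minimal sketch, assuming the value lives in a hypothetical field named feed_version: strptime converts the packed timestamp straight to epoch time, and strftime renders it back in any readable format you like.

    ... | eval epoch_time = strptime(feed_version, "%Y%m%d%H%M")
        | eval readable = strftime(epoch_time, "%Y-%m-%d %H:%M")

For 202311071200 this yields the epoch for 2023-11-07 12:00 (interpreted in the search head's timezone).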
Apparently my Google-fu isn't the best and I can't find an explanation. Can someone please enlighten me? I have a lookup table that looks like this:

CIDR, ip_address
24, 1.2.3.4/24
23, 5.6.7.8/23

I want events whose source IPs match the IP addresses in the lookup table and whose destination IPs do not. I ran the following query, and it appears to work (unless it actually doesn't?):

index="index1"
| lookup lookup1 ip_address as src_ip OUTPUTNEW ip_address as address
| where dest_ip!=address

My confusion stems from the fact that ip_address is in CIDR notation. The way my mind processes this query is that a new field called address is created, and the value of dest_ip is compared against the value of address. However, the value of address is in CIDR notation and dest_ip is not. Is address treated as a list, with the value of dest_ip checked against each item in the list?
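For the lookup step to match anything, the lookup definition almost certainly has match_type = CIDR(ip_address) configured, which is what matches src_ip against the ranges. The where clause, however, is a literal string comparison: dest_ip (a plain IP) is compared to address (a CIDR string such as 1.2.3.4/24), which is virtually always unequal, so the destination side is not really being CIDR-checked, and address is not expanded into a list of member IPs. One possible rewrite (a sketch, assuming the lookup definition is configured with match_type = CIDR(ip_address)) runs the lookup against both fields and filters on which side matched:

    index="index1"
    | lookup lookup1 ip_address AS src_ip OUTPUTNEW ip_address AS src_match
    | lookup lookup1 ip_address AS dest_ip OUTPUTNEW ip_address AS dest_match
    | where isnotnull(src_match) AND isnull(dest_match)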
Hi, I have created a cluster map that shows counts based on the number of ASA blocked actions. The circle size is based on the number of hits: a bigger circle represents more counts than a smaller one. So far so good. It looks OK, but it would be even better if I could change the color based on the number of counts/hits. Is it also possible to change the color based on the destination port number (80, 23, 22, etc.)?

Thanks
Geir
Hi Splunkers, in our Splunk Cloud environment we have two needs:

Reassign a knowledge object's owner
Reassign a knowledge object's app

Managing the first point is well known and we have already applied it, assigning all KOs created by us to a service account. I don't remember whether the second is possible: can I reassign a KO's app? For example, if we assigned an alert to the Search app, is it possible to move it to another one? And if yes, how?
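This is generally possible: in Splunk Web, under Settings > Searches, reports, and alerts (and similar KO listings), the Edit menu offers a Move action that relocates an object to another app. There is also a REST move endpoint; a sketch, assuming a saved search named MyAlert owned by nobody in the search app and a hypothetical destination app target_app:

    curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches/MyAlert/move \
         -d app=target_app -d user=nobody

In Splunk Cloud the management port is not directly exposed, so the UI Move action (or Splunk support, for objects the UI cannot move) is usually the practical route.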
I have two inputs on my Dashboard Studio (JSON) dashboard, both dynamic: the first is a dynamic multiselect and the second is a dynamic dropdown. The first input shows a list of rule names, and the second shows sensitivity labels. When I select multiple rule names from the multiselect input, I would like the corresponding sensitivity labels to populate dynamically in the next dropdown. Currently it shows a warning sign with "no results found", whereas the same setup works fine when I select a single value in the multiselect input. Please help me fix this.
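A common cause is that the cascaded data source treats the multiselect token as one value. A sketch of a second-input search that accepts multiple values, assuming hypothetical field names rule_name and sensitive_label and a multiselect token $rule_tok$:

    index=rules rule_name IN ($rule_tok$)
    | stats count BY sensitive_label

For IN to parse, the token must expand to a comma-separated list of quoted values, e.g. "rule A","rule B". Check the multiselect's token prefix/suffix/delimiter formatting in the dashboard JSON: a space-separated expansion produces exactly the symptom described, where single selections work and multiple selections return no results.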
We recently upgraded from Splunk 8.x to 9.x, after which all Python scripts fail with SSL errors. We have updated all packages for Python 3.7, but it still throws an SSL error:

File "/apps/splunk/etc/apps/xxxx/bin/splunklib/binding.py", line 32, in <module>
    import ssl
File "/apps/splunk/lib/python3.7/ssl.py", line 98, in <module>
    import _ssl # if we can't import it, let the error propagate
ImportError: libssl.so.1.0.0: cannot open shared object file: No such file or directory.

In the path /apps/splunk/lib I do have libssl.so.1.0.0.
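That loader error usually means the interpreter was started outside Splunk's environment, so LD_LIBRARY_PATH does not include /apps/splunk/lib, where libssl.so.1.0.0 lives. One thing to try (a sketch, with a hypothetical script path) is launching the script through Splunk's wrapper, which sets the library path to the bundled OpenSSL:

    /apps/splunk/bin/splunk cmd python3 /apps/splunk/etc/apps/xxxx/bin/your_script.py

Scripts invoked by Splunk itself (scripted inputs, custom search commands) inherit this environment automatically; scripts run from cron or an interactive shell do not.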
Hello, we have migrated our standalone installation of Splunk Enterprise to a "small enterprise distributed deployment". It is a really small distributed deployment because the load is essentially indexing capacity, and even that is under 100 GB daily (our license allows 80 GB); search load is really low. So we have:

- 1 search head
- 2 indexers (no cluster)

The search head also acts as license master and deployment server (just HEC configs and index definitions pushed to the indexers). Now the question is: is it possible to install the Monitoring Console on the search head node? We have seen the recommendation here, in particular:

"When you set up the monitoring console in distributed mode, it creates one search group for each server role, identified cluster, or custom group. Unless you use a "splunk_server_group" or the "splunk_server" option, only search peers that are members of the indexer group are searched by default. Because all searches that run on the monitoring console instance follow this behavior, non-monitoring console searches might have incomplete results."

I'm not sure I really understand this, but as we only have 2 indexers, and since they are exactly the nodes we want to put in the indexer group on the MC side, could it really lead to incomplete searches? The same advice is given in the dashboard, via the MC general setup page, when trying to activate distributed mode:

"Do not configure the DMC in distributed mode if this is a production search head. Doing so can change the behavior of all searches on this instance. This is dangerous and unsupported."

As already said, load is a secondary consideration because we do not have heavy search activity. Thanks a lot.
Hi all, I am trying to set up SAML with my custom IdP, but Splunk returns an "Unsupported algorithm" error even though the algorithm type is correct in the SAML response. Can you kindly help/guide me on how to troubleshoot this issue? I have attached my SAML response and SAML configuration settings in this post.

[Screenshot: error from Splunk Cloud]

[Screenshot: SAML configuration in Splunk Cloud]

SAML response (SignedInfo excerpt):

<ns2:SignedInfo>
  <ns2:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
  <ns2:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
  <ns2:Reference URI="#id-S7lv9JFItlthO8Lzr">
    <ns2:Transforms>
      <ns2:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
      <ns2:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
    </ns2:Transforms>
    <ns2:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
    <ns2:DigestValue>Kq/4Vh3rMrw0H/yFvAnmr0KH8qrAbqYrU+stI/WODZY=</ns2:DigestValue>
  </ns2:Reference>
</ns2:SignedInfo>

Thanks
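One setting worth cross-checking is the signature algorithm Splunk expects versus the RSA-SHA256 the IdP is sending. On-prem this lives in authentication.conf (a sketch; in Splunk Cloud the same choice is exposed in the SAML configuration UI shown above):

    [saml]
    signatureAlgorithm = RSA-SHA256

If Splunk is still set to RSA-SHA1 while the response is signed with rsa-sha256, an unsupported-algorithm style mismatch is a plausible outcome.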
Hi, I have an issue with the fishbucket of the Universal Forwarder. I have looked for documentation, but there seems to be very little, and there are also few topics on it. The problem I am facing is that the fishbucket is taking up a large amount of space, about 2 GB on the hard drive, while the limit configured in limits.conf is:

file_tracking_db_threshold_mb = 500

In some other topics, I read that the fishbucket can grow to 2 or 3 times the configured limit because of its backup mechanism with the saved file and snapshot.tmp. However, is there a limit to the size of the fishbucket? Will it continue to expand over time without limit, or only up to a certain point?

PS: I have the nmon TA installed on my server.

Please point me to the Splunk documentation on this. Thank you.
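For reference (a sketch restating the setting already quoted above): this setting lives in the [inputproc] stanza of limits.conf, and per the limits.conf spec it is the trigger point at which the file tracking database (the fishbucket btree) rolls over, not a hard cap on directory size, which is consistent with the 2-3x on-disk usage described:

    [inputproc]
    file_tracking_db_threshold_mb = 500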
Hi Team, I have two events; attaching a screenshot for reference.

1. How do I retrieve the uniqObjectIds and display them in table form?
2. How do I retrieve the objectIds and version and display their values in separate table columns?

First event:

msg: unique objectIds
name: platform-logger
pid: 8
uniqObjectIds: [ 275649, 108976 ]
uniqObjectIdsCount: 1

Second event:

event: {
  body: {
    "objectType": "material",
    "objectIds": [ "275649" ],
    "version": "latest"
  }
  msg: request body
}

The query I came closest with is below, but I am still unable to get what I want.

Actual: [screenshot]

Expected: a table with each object in a different row, e.g.:

uniqueIds
275649
108976

index="" source IN ("")
| eval PST=_time-28800
| eval PST_TIME=strftime(PST, "%Y-%d-%m %H:%M:%S")
| eval split_field=split(_raw, "Z\"}")
| mvexpand split_field
| rex field=split_field "objectIdsCount=(?<objectIdsCount>[^,]+)"
| rex field=split_field "uniqObjectIdsCount=(?<uniqObjectIdsCount>[^,]+)"
| rex field=split_field "recordsCount=(?<recordsCount>[^,]+)"
| rex field=split_field "sqsSentCount=(?<sqsSentCount>[^,]+)"
| where objectType="material"
| table _time, PST_TIME, objectType, objectIdsCount, uniqObjectIdsCount, recordsCount, sqsSentCount
| sort _time desc
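A possible approach for both requirements (a sketch, assuming the events are valid JSON so spath can parse them, and hypothetical index/source filters). For the first event, one row per unique ID:

    index=your_index "uniqObjectIds"
    | spath path=uniqObjectIds{} output=uniqueIds
    | mvexpand uniqueIds
    | table uniqueIds

and for the second event, objectIds and version as separate columns:

    index=your_index "request body"
    | spath path=event.body.objectIds{} output=objectIds
    | spath path=event.body.version output=version
    | mvexpand objectIds
    | table objectIds, version

If body is itself a JSON string rather than a nested object, extract it first and add a second spath with input=body.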
Hi, what are the steps for setting up an email alert for when the SQL Server and SQL Agent services are down?
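One possible approach, a sketch assuming Windows System event logs are already collected into a hypothetical wineventlog index: search for service-stopped events (EventCode 7036) for the two services,

    index=wineventlog sourcetype="WinEventLog:System" EventCode=7036
        (Message="*SQL Server*" OR Message="*SQL Server Agent*")
        Message="*entered the stopped state*"
    | table _time, host, Message

then save it as an alert (for example, run every 5 minutes over the last 5 minutes, trigger when the result count is greater than 0) and add the Send email alert action.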
Hi there! I would like to pass two values based on the selection of inputs in a multiselect drilldown. Assume my multiselect options are v1, v2, v3, v4. Based on the selection, e.g. if v1 and v2 are selected, I would like to pass "value1" and "value2" in an OR condition to a token of a base search. Thanks in advance!
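A minimal sketch, assuming a Simple XML dashboard and a hypothetical field name my_field: the multiselect's formatting options can assemble the OR expression for you, so the base search only needs the token.

    <input type="multiselect" token="sel_tok">
      <label>Values</label>
      <choice value="value1">v1</choice>
      <choice value="value2">v2</choice>
      <choice value="value3">v3</choice>
      <choice value="value4">v4</choice>
      <prefix>(</prefix>
      <valuePrefix>my_field="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <suffix>)</suffix>
    </input>

Selecting v1 and v2 makes $sel_tok$ expand to (my_field="value1" OR my_field="value2"), which the base search can consume as, e.g., index=main $sel_tok$.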
Hello, I've set up an identity lookup using ldapsearch. It creates an identity of "username" that contains various details about the user, including the email address, and it works well at identifying the user as `username` and `useremail@domain`. However, I'd also like it to identify users as `domain\username` and `username@domain` (which is actually different from `useremail` in our case), since a lot of our logs contain the user field in those formats. What's the best way to do that?
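One possible approach, a sketch assuming the Enterprise Security asset/identity framework (whose identity field accepts a pipe-delimited list of alternate identities) and hypothetical LDAP attribute and domain names: extend the ldapsearch-driven saved search to emit the extra forms alongside the existing ones.

    | ldapsearch domain=default search="(objectClass=user)" attrs="sAMAccountName,userPrincipalName,mail"
    | eval identity = sAMAccountName . "|MYDOMAIN\\" . sAMAccountName . "|" . userPrincipalName . "|" . mail

Here userPrincipalName typically carries the username@domain form; if yours differs, build that string with eval instead.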
Dear all, I have a lookup file with transaction details and transaction names like below. It would be great if someone could suggest how to handle this scenario.

Tran_lookup    Transaction_Details
ABC            Shopping
CDE            Rent

From my Splunk index I am running a stats command like the one below (Tran from the index matches Tran_lookup in the lookup):

... | stats count(Tran) as count, avg(responstime) as avgrt by Tran

I need to add the matching Transaction_Details from the lookup to the final stats results.

Current results: Tran, count, avgrt
Required results (matching Transaction_Details pulled based on Tran): Tran, Transaction_Details, count, avgrt
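A possible approach (a sketch, assuming the lookup is defined as tran_lookup and its first column, shown as Tran_lookup above, holds the transaction name): run the lookup after the stats so every result row is enriched with its Transaction_Details.

    index=your_index
    | stats count(Tran) AS count, avg(responstime) AS avgrt BY Tran
    | lookup tran_lookup Tran_lookup AS Tran OUTPUT Transaction_Details
    | table Tran, Transaction_Details, count, avgrt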
Hello, I am looking to use Splunk Free edition to teach students about searching through logs. I plan on setting up Splunk in a virtual environment, generating logs, and then exporting the data; students will then install Splunk on their own machines and import the generated data. The Free edition page states: "Are you planning to ingest a large (over 500 MB per day) data set only once, and then analyze it? The Splunk Free license lets you bulk load much larger data sets up to 2 times within a 30 day period." My question: what is the maximum amount of data that can be imported at a single time? Although the virtual environment will be small, only a few workstations and servers, I am worried that the sample data sets I generate might be too large. Thank you.