Splunk Lantern is Splunk’s customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that’s possible with data sources and data types in Splunk.

This month we’re highlighting a new video that shows you all the ways Lantern can help you to achieve success. We’ve also published a new section of our Use Case Explorer for the Splunk Platform with brand new use cases relevant for energy sector customers. And as usual, we’re also sharing the rest of the new articles we’ve published this month. Read on to see what’s new.

Lantern: Lighting Your Success with Splunk

Did you know that Lantern holds nearly a thousand different articles for users of the core platform, plus premium Security and Observability products? Our articles cover everything from the basics of getting started with Splunk for newer users, to more advanced tips to help you work with Splunk like a pro, all the way through to the guidance provided by the Splunk Success Framework to help you operate Splunk as a program in your organization. Whether you’re a user or an admin, new or experienced, and whatever your goals, we’re confident that Lantern has helpful guidance for you. Watch our new 5-minute video for an overview of all of our different types of articles and to find out where to look for articles that’ll help take your Splunk usage to the next level.

Platform Use Cases for Energy Customers

The Use Case Explorer for the Splunk Platform helps you develop new use cases using either Splunk Enterprise or Splunk Cloud Platform. The Explorer gives you an easy way to access use cases that are especially relevant for particular industries, such as Finance, Healthcare, Public Sector and more. We’ve just updated the Use Case Explorer with a new section for Energy sector customers. This section contains a number of use cases with searches that are specific to Operational Technology environments, allowing you to improve the security of these environments and ensure compliance with key legislation.

If you’re an energy customer, be sure to bookmark this page - we’ll be adding to it over the coming weeks with more energy-specific content, including new guidance on using Splunk Edge Hub with energy meters. Let us know what you think and what other use cases you’d like to see by dropping a comment below!

Everything Else New This Month

Here are all of the new articles that we’ve published this month:

Protecting Operational Technology (OT) environments
Reducing PAN and Cisco security firewall logs with Splunk Edge Processor
Detecting Operational Technology assets communicating with external systems
Using the OT Security add-on for Splunk to ensure NERC CIP compliance
Sharing data between Splunk IT Service Intelligence and Splunk Enterprise Security
Using Splunk DataSense Navigator
Safeguarding Workload Management operation during the transition to cgroups v2

We hope you’ve found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
I currently collect logs using the Windows universal forwarder. My client has requested a copy of the logs collected from the Windows sources over the last 2 months. Is there any way to access this information, or is the only way to run a query like index=main | fields _raw?
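One possible way to pull those events for export (a minimal sketch; index=main and the WinEventLog sourcetype filter are assumptions, so substitute whatever your Windows inputs actually write to):

index=main sourcetype=WinEventLog* earliest=-60d@d latest=now
| table _time host source sourcetype _raw

The results can then be exported from the search UI or written out with outputcsv; the fields _raw approach in the original post would also work, but restricting by sourcetype and time range keeps the export to exactly what the client asked for.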
Is it possible to create a Splunk app with a trial feature? Trial in the sense that it runs for x days with full features (the trial period) and, after x days, if a client code/password (or some kind of license) is not provided by the user, it stops working or continues with reduced features. Where can I find any instructions on how to do this? If possible, can such an app be published on Splunkbase? Best regards, Altin
While trying to upload my CSV file as a lookup, I encounter the following error: "Encountered the following error while trying to save: File has no line endings". I have tried removing extra spaces and special characters from the header but am still facing this issue. I have also tried saving the file in different CSV formats, like UTF-8 and CSV (MS-DOS), but no luck.
Hi, We set up Security Command Center to send alerts to Splunk for detecting mining activity. However, I've observed that we're not receiving SCC logs in Splunk at the moment. What steps can we take to resolve this issue? Thanks
Hi, I want to migrate or move the Splunk instance from a Mac to a Windows Server 2019 machine. I want to make sure the license is moved to the new machine. Is there a step-by-step process to perform this activity? Thanks.
Hello there, we use search filters in our role management concept. It works fine, but we got stuck on the following problem: some of our hosts have a physical hostname (srv1, srv2, srv3, ...) and a virtual hostname (server1-db, server2-db, server3-db, server1-web, server2-web, server3-app), so we had to use a lookup table (on the search heads) in order to map the virtual names to the physical hostnames (which are the names identified by the Splunk forwarder).

Our lookup table looks like this:

sys_name,srv_name
srv1,server-db1
srv2,server-db2
srv3,server-web1
srv4,server-web2
srv5,server-app1
srv6,server-app2

My role settings look like this:

[role_metrics_db]
srchFilter = index=metrics AND (host=server-db* OR srv_name=server-db*)

[role_metrics_web]
srchFilter = index=metrics AND (host=server-web* OR srv_name=server-web*)

[role_metrics_app]
srchFilter = index=metrics AND (host=server-app* OR srv_name=server-app*)

Unfortunately my search filters do not recognize either the field "sys_name" or "srv_name". Should the search filters be written differently? Has anyone had the same challenge? Any help will be appreciated. Cheers!
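For reference, a sketch of how the lookup could be wired up as an automatic lookup so that srv_name is populated as a search-time field (the stanza and file names here are hypothetical). Note that role search filters are applied very early in the search, so depending on the Splunk version they may only match reliably against indexed fields such as host; if that turns out to be the case, an index-time mapping rather than a search-time lookup may be needed:

# transforms.conf (search heads) - hypothetical lookup definition
[host_name_mapping]
filename = host_name_mapping.csv

# props.conf (search heads) - map the forwarder-reported host to srv_name
[your_sourcetype]
LOOKUP-srv_name = host_name_mapping sys_name AS host OUTPUT srv_name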
I'm creating a dashboard to easily search through our web proxy logs and table out the results when troubleshooting. The issue is that sometimes the logs contain a destination IP and sometimes they don't. One of the dashboard input fields I want to be able to specify is the destination IP (field: dest_ip); however, that field doesn't always exist, so if I use the following search (tabling excluded):

index=proxy c_ip=$cip$ cs_host=$cshost$ action=$action$ dest_ip=$destip$

with these dashboard values:

c_ip=1.2.3.4
cs_host=* (default)
action=* (default)
dest_ip=* (default)

it will exclude some of the logs, since they don't all have the field "dest_ip". The other 3 fields exist in all logs. In the dashboard you can input values for each of the fields. I'm trying to allow that for dest_ip, but it doesn't always exist - that's the issue I'm trying to overcome.
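One commonly used workaround is to give events that lack the field a placeholder value before the token filter is applied, so that the default value of * still matches them. A minimal sketch using the same tokens as above:

index=proxy c_ip=$cip$ cs_host=$cshost$ action=$action$
| fillnull value="none" dest_ip
| search dest_ip=$destip$

With dest_ip left at its default of *, events without a destination IP now carry the placeholder and are kept; when a specific IP is entered, those events are filtered out as expected.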
Hi, I have ServiceNow data for change requests in Splunk. I want to create a dashboard which gives the average duration of a change request (from actual start date to actual end date) for each type of change. The type of change can be derived from the short_description field, with the average duration on the y-axis and the type of change request (short_description) on the x-axis. I have written this query, but it is not giving the average duration of a change. The result I am getting is too high; maybe it's calculating across all the events for the same change number, I'm not sure.

index=servicenow short_description IN ("abc", "xyz", "123")
| eval start_date_epoch = strptime(dv_opened_at, "%Y-%m-%d %H:%M:%S"), end_date_epoch = strptime(dv_closed_at, "%Y-%m-%d %H:%M:%S")
| eval duration_hours = (end_date_epoch - start_date_epoch) / 3600
| eval avg_duration = round(avg_duration_hours, 0)
| stats avg(duration_hours) as avg_duration by change_number, short_description
| eventstats avg(avg_duration) as overall_avg_duration by short_description
| eval ocb = round(overall_avg_duration, 0)
| table short_description, ocb
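A hedged sketch of how the calculation could be restructured, assuming dv_opened_at and dv_closed_at are the correct start/end fields and that the inflated numbers come from each change request appearing in many events. Reducing to one duration per change_number before averaging avoids that:

index=servicenow short_description IN ("abc", "xyz", "123")
| eval start_epoch = strptime(dv_opened_at, "%Y-%m-%d %H:%M:%S"), end_epoch = strptime(dv_closed_at, "%Y-%m-%d %H:%M:%S")
| eval duration_hours = (end_epoch - start_epoch) / 3600
| stats max(duration_hours) as duration_hours by change_number, short_description
| stats avg(duration_hours) as avg_duration_hours by short_description
| eval avg_duration_hours = round(avg_duration_hours, 0)

Also note that in the original query, round(avg_duration_hours, 0) runs before any field named avg_duration_hours exists, so that eval produces nothing.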
Hi, I have the below scenario. Could you please help?

spl1: index=abc sourcetype=1.1 source=1.2 "downstream" "executioneid=*"
spl2: index=abc sourcetype=2.1 source=2.2 "do not write to downstream" "executioneid=*"

Both searches use the same index and they have a common field called executionid. Some execution IDs are designed not to go to the downstream application in the flow. I want to combine these two searches based on the executioneid.
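A minimal sketch of one way to combine them on the shared ID, assuming the field is actually extracted as executioneid in both sourcetypes (the post uses both spellings, so adjust to whichever field name really exists):

index=abc ((sourcetype=1.1 source=1.2 "downstream") OR (sourcetype=2.1 source=2.2 "do not write to downstream")) executioneid=*
| stats values(sourcetype) as sourcetypes dc(sourcetype) as sourcetype_count by executioneid
| where sourcetype_count > 1

The final where keeps only execution IDs that appear in both searches; drop it (or invert it) to see the IDs that appear in only one of them.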
How do I remediate this vulnerability? Tenable 164078: "Upgrade Splunk Enterprise or Universal Forwarder to version 9.0 or later."
Hello. I have a problem with a Splunk Dashboard Studio table. Sometimes after refreshing the table, when the content is reloaded, the column widths become random: some are too wide, some are too narrow, even though there is a lot of blank space. That makes the content not fit the table and a scroll bar appears. It does not happen all the time, only occasionally, and I was not able to determine what it depends on. After sorting the table by one of the columns, everything goes back to normal: column widths become even and the content does not overflow anymore. Note that I have set a static width for the first column. I have tried removing it, but that does not seem to help much; the column widths still get messed up. Does anyone have any suggestions as to what could be causing this? I would like to avoid setting static widths for all columns if possible, because in some situations the total number of columns can be different. I am using Splunk Enterprise v9.1.1.
I have a sample log file from Apache. How can I verify with Splunk that this log really is an Apache log? Are there any tools or methods for that?
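There is no single built-in validator, but the Add Data preview will usually suggest a pretrained sourcetype such as access_combined if the file matches Apache's access log format. After indexing a sample you can also test the events against the common/combined log layout with a regex; a rough sketch, assuming the sample was indexed under a hypothetical test sourcetype:

index=main sourcetype=apache_sample
| regex _raw="^\S+ \S+ \S+ \[[^\]]+\] \"\S+ \S+ HTTP/\d\.\d\" \d{3} \S+"
| stats count

If the count matches the number of events in the sample, the lines conform to the Apache access log layout.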
Hi, yesterday I upgraded a Splunk instance from 8.2.6 to 9.1.2. Afterwards, all users that have the role "user" are logging this message every 10 milliseconds:

01-04-2024 08:53:44.220 +0000 INFO AuditLogger - Audit:[timestamp=01-04-2024 08:53:44.220, user=test_user, action=admin_all_objects, info=denied ]

This issue is filling the _audit index very fast. I had to reduce the index size as a workaround, but it doesn't resolve the problem. Have you ever had this problem in your environment?
I have field values in events something like below:

TOOL_Status description Event_ID Host_Name
CLOSED 21alerts has been issued abc 2143nobi11 abc
CLOSED 21alerts has been issued abc 2143nobi11 abc
OPEN 21alerts has been issued abc 2143nobi11 abc
OPEN 21alerts has been issued 111 2143nobi12 111
CLOSED 21alerts has been issued 111 2143nobi12 111
CLOSED 21alerts has been issued xyz 2143nobi15 xyz
CLOSED 21alerts has been issued xyz 2143nobi15 xyz
CLOSED 21alerts has been issued xyz 2143nobi15 xyz

If TOOL_Status=OPEN and TOOL_Status=CLOSED both exist for the same Event_ID, then create a new field new_status=1; the Event_ID should be ignored if only TOOL_Status=CLOSED exists for it.
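A hedged sketch of one way to flag these, collecting all statuses per Event_ID with eventstats and then checking that both values are present (the base search is a placeholder, and the mvfind checks assume OPEN and CLOSED are the exact status strings):

index=your_index sourcetype=your_sourcetype
| eventstats values(TOOL_Status) as all_statuses by Event_ID
| eval new_status=if(isnotnull(mvfind(all_statuses, "^OPEN$")) AND isnotnull(mvfind(all_statuses, "^CLOSED$")), 1, null())

Event IDs that only ever have CLOSED get no new_status, so they are effectively ignored.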
We are looking for an API request which fetches the audit logs/events performed by users in various applications.
Hi all, I have created a search which returns a set of email addresses and a few hosts, and I'm using the table command to display them. The result looks like below:

Hostname  Agent Version  Email
host1     1.0            test1@gmail.com
host2     2.0            test2@gmail.com
host3     2.0            test1@gmail.com
host4     2.0            test1@gmail.com

Now I want to send separate emails to test1@gmail.com and test2@gmail.com. Each email should only contain the hosts belonging to that recipient, i.e. host1, host3, host4 and their agent versions should go to test1@gmail.com, and host2 should go to test2@gmail.com. I also want to embed a link in the alert email body that redirects to the search results and contains only the hostnames that belong to that particular recipient. Can anyone help me with how to generate a dynamic alert link? Regards, PNV
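Splunk's built-in email alert action sends one message per alert, so a fully per-recipient setup usually ends up as either one scheduled search per recipient or an add-on that loops over results. For building the per-recipient host list and a clickable link, a rough sketch appended to the existing search that already produces Hostname, Agent Version, and Email (the Splunk host, app path, and index in the URL are hypothetical, and characters beyond spaces may also need URL-encoding):

| stats values(Hostname) as hosts values("Agent Version") as agent_versions by Email
| eval host_filter="(host=" . mvjoin(hosts, " OR host=") . ")"
| eval drilldown="https://your-splunk-host:8000/en-US/app/search/search?q=" . replace("search index=your_index " . host_filter, " ", "%20")

Each row then carries the recipient's address, their hosts and agent versions, and a link that opens a search scoped to just those hostnames.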
I have the following transforms.conf file:

[pan_src_user]
INGEST_EVAL=src_user_idx=json_extract(lookup("user_ip_mapping.csv",json_object("src_ip", src_ip),json_array(src_user_idx)),"src_user")

and props.conf file:

[pan:traffic]
TRANSFORMS-pan_user = pan_src_user

user_ip_mapping.csv file sample:

src_ip     src_user
10.1.1.1   someuser

However, it's not working and I'm not sure what I'm doing wrong. The src_user_idx field is not showing up in any of the logs.
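For comparison, a hedged sketch of the same transform with the output field list quoted; in the ingest-time lookup() function the third argument is expected to be a JSON array of lookup column names, so an unquoted src_user_idx refers to a field that does not exist yet. Also worth checking: INGEST_EVAL runs at index time, so the transform, the props entry, and user_ip_mapping.csv all need to be on the indexers or heavy forwarders that parse this data, not only on the search heads.

# transforms.conf (sketch, same stanza name as above)
[pan_src_user]
INGEST_EVAL = src_user_idx=json_extract(lookup("user_ip_mapping.csv", json_object("src_ip", src_ip), json_array("src_user")), "src_user")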
Hi, I'm still new to Splunk and I understand that I can extend a search or report lifetime either by using the GUI or by changing dispatch.ttl when scheduling a report. I want to know what will happen when I have hundreds of searches and reports with extended lifetimes (7 days or more): will there be any impact on the hardware resources when Splunk holds so much data for these reports and searches?
Hi, does anyone know of a summary index used by Splunk to retain the index sizes? I can calculate an index size by using the internal index, but I need to go back further than the last month. Any other method is welcome as well. Thanks
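As far as I know there is no dedicated summary index for this out of the box, but dbinspect reports current bucket sizes per index regardless of how long _internal is retained, and scheduling it with collect builds your own history going forward. A minimal sketch, assuming a summary index named idx_size_history has been created:

| dbinspect index=*
| stats sum(sizeOnDiskMB) as size_on_disk_mb by index
| collect index=idx_size_history

Run on a daily schedule, this gives a per-index size snapshot you can trend over whatever retention period you choose for the summary index.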