All Topics

Hi All,

We created a dashboard to monitor CCTV and it was working fine, but data suddenly stopped populating. We have not made any changes.

My findings:
1 - If I select "Last 30 days" the dashboard works fine.
2 - If I select "Last 20 days" the dashboard does not work.
3 - I started troubleshooting and found the following.

SPL query - the query below works fine when the time range is "Last 30 days":

index=test1 sourcetype="stream" NOT upsModel=*1234*
|rename Device AS "UPS Name"
|rename Model AS "UPS Model"
|rename MinRemaining AS "Runtime Remaining"
|replace 3 WITH Utility, 4 WITH Bypass IN "Input Source"
|sort "Runtime Remaining"
|dedup "UPS Name"
|table "UPS Name" "UPS Model" "Runtime Remaining" "Source" "Location"

Note: the same SPL query does not work when the time range is "Last 20 days".

Troubleshooting - Splunk is still receiving data up to today, but I noticed a few things. When I select "Last 30 days" I can see the fields UPS Name, UPS Model, Runtime Remaining, and Source in the search. When I select "Last 20 days" those same fields are missing, and I am not sure why. Because the fields are missing, the same query returns no data for that range - the dedup/table portion at the end is not pulling anything.

Thanks

Hi Team,

I am using a free trial version of Splunk and forwarding logs from a Palo Alto firewall to Splunk. Sometimes I get logs and sometimes I don't; it seems to be a time zone issue. My Palo Alto firewall is in the US/Pacific time zone. How can I check the Splunk time zone, and how can I configure it to be the same on both sides? #splunktimeZone

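As a sketch of the usual fix (not necessarily the Palo Alto add-on's official configuration): tell Splunk which time zone the device's timestamps are written in via a TZ setting in props.conf on the parsing tier. The sourcetype name below is a placeholder - use whatever sourcetype your firewall logs actually arrive with.

# props.conf on the indexer or heavy forwarder
# (the sourcetype name is a placeholder - match your Palo Alto sourcetype)
[your_paloalto_sourcetype]
TZ = US/Pacific
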
I am confused about how to get this app to work. Can anyone provide me with an instruction sheet telling me what needs to be done? I have downloaded and installed the PCAP Analyzer app but can't seem to get it to analyze anything. Can anyone help me?

What happens if the amount of data exceeds the daily limit in Splunk Cloud ("Total ingest limit of your ingest-based subscription")? Which of the following applies:
- Data ingestion stops, or
- Splunk contacts you to discuss adding a license, but ingestion does not stop

Hello, this is my first experience with Splunk as I am setting up a lab in VirtualBox. I have:

VM1, acting as the server: Ubuntu Desktop 24.04 LTS, IP 192.168.0.33 - installed Splunk Enterprise, added port 997 under "Configure receiving", and added an index named Sysmonlog.

VM2, acting as the client: Windows 10, IP 192.168.0.34 - installed Sysmon, installed the Splunk Universal Forwarder, set the deployment server to 192.168.0.34 port 8089, and set the indexer to 192.168.0.33 port 9997.

Ping is successful from both VMs. When I try to add the forwarder on my indexer, nothing shows up. How should I troubleshoot this so I can add the forwarder?

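For a starting point, a minimal troubleshooting sketch run from the indexer's search bar, assuming the forwarder reports its Windows machine name as its host (substitute your VM2 host name):

index=_internal host=<VM2 hostname>

If this returns the forwarder's own splunkd logs, the connection to the receiving port is working and the problem is more likely the Sysmon input or index name. To list sources that have opened connections to the receiving port, this is also commonly used on the indexer:

index=_internal source=*metrics.log* group=tcpin_connections | stats latest(_time) as last_seen by sourceIp

If neither returns anything, checking splunkd.log on the forwarder itself for TcpOutputProc errors is the usual next step.
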
Our vulnerability scan is reporting a critical-severity finding affecting several components of Splunk Enterprise related to an OpenSSL (1.1.1.x) version that has become EOL/EOS. My research seems to indicate that this version of OpenSSL may not yet be EOS for Splunk due to the purchase of an extended support contract; however, I have been unable to find documentation to support this. Please help provide this information or suggest how this finding can be addressed.

Path: /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/libcrypto.so
Installed version: 1.1.1k
Security End of Life: September 11, 2023
Time since Security End of Life (Est.): >= 6 months

Thank you.

Situation: search cluster on 9.2.2, 5 nodes running Enterprise Security 7.3.2. I'm in the process of adding 5 new nodes to the cluster.

Part of my localization involves creating /opt/splunk/etc/system/local/inputs.conf with the following contents (I do this to make sure the host field for forwarded internal logs doesn't contain the FQDN, like the hostname in server.conf does):

[default]
host = <name of this host>

When I get to the step where I run:

splunk add cluster-member -current_member_uri https://current_member_name:8089

it works, but /opt/splunk/etc/system/local/inputs.conf is replicated from current_member_name. And if I run something like:

splunk set default-hostname <name of this host>

...it modifies inputs.conf on EVERY node of the cluster.

Digging into this, I believe it is happening because of the domain add-on DA-ESS-ThreatIntelligence, which contains a server.conf file in its default directory (why that would be, I've no idea). Contents of /opt/splunk/etc/shcluster/apps/DA-ESS-ThreatIntelligence/default/server.conf on our cluster deployer - which is now delivered to all cluster members:

[shclustering]
conf_replication_include.inputs = true

It seems to me that this stanza is causing the issue. Am I on the right track? And why would DA-ESS-ThreatIntelligence be delivered with this particular config? Thank you.

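If that stanza is indeed the culprit, one workaround that gets suggested (a sketch only - verify the precedence behavior and any ES side effects in a test environment first) is to override the setting with a higher-precedence server.conf on each member so inputs.conf stays out of configuration replication:

# /opt/splunk/etc/system/local/server.conf on each cluster member
# (system/local takes precedence over an app's default directory)
[shclustering]
conf_replication_include.inputs = false
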
First of all, hello everyone. I have a Mac (M1) computer on which I installed Splunk Enterprise Security. I then wanted to install Splunk SOAR, but could not because of the CentOS/RHEL ARM incompatibility on the virtual machine, so I rented a virtual machine from Azure and installed Splunk SOAR there. Splunk Enterprise is installed on my local network. First, I connected Splunk Enterprise to SOAR by following the instructions in this video (https://www.youtube.com/watch?v=36RjwmJ_Ee4&list=PLFF93FRoUwXH_7yitxQiSUhJlZE7Ybmfu&index=2), and the connectivity test was successful. Then I tried to connect SOAR to Splunk Enterprise by following the instructions in this video (https://www.youtube.com/watch?v=phxiwtfFsEA&list=PLFF93FRoUwXH_7yitxQiSUhJlZE7Ybmfu&index=3), but I had trouble because Splunk SOAR and Splunk Enterprise Security are on different networks. In the most common examples I came across, SOAR and Splunk Enterprise Security are on the same network, but mine are not. What should I enter as the host IP when trying to connect from SOAR? What is the solution? Thanks for your help.

Can you create searches using the REST API in Splunk Cloud?

Hi All, I have a somewhat unusual requirement (at least to me) that I'm trying to figure out how to accomplish. In the query that I'm running, there's a column which displays a number of months, i.e. 24, 36, 48, etc. What I'm attempting to do is take that number and create a new field which takes today's date and subtracts that number of months to derive a prior date. For example, if the number of months is 36, then the field would display "08/29/2021"; essentially the same thing that this is doing: https://www.timeanddate.com/date/dateadded.html?m1=8&d1=29&y1=2024&type=sub&ay=&am=36&aw=&ad=&rec= I'm not exactly sure where to begin with this one, so any help getting started would be greatly appreciated. Thank you!

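A minimal SPL sketch of one way to do this, assuming the month count sits in a field named months (rename to match your column): relative_time() can step back N months from now(), and strftime() formats the result.

| eval prior_epoch=relative_time(now(), "-" . months . "mon")
| eval prior_date=strftime(prior_epoch, "%m/%d/%Y")
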
I have a subsearch:

[search index="june_analytics_logs_prod" (message=* new_state: Diagnostic, old_state: Home*)
| spath serial output=serial_number
| spath message output=message
| spath model_number output=model
| eval keystone_time=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q")
| eval before=keystone_time-10
| eval after=_time+10
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S.%Q")
| table keystone_time, serial_number, message, model, after]

I would like to take the after and serial fields and use them to construct a main search like:

search index="june_analytics_logs_prod" serial=$serial_number$ message=*glow_v:* earliest=$keystone_time$ latest=$after$

Each event yielded by the subsearch has a time when the event occurred. I want to find events matching the same serial, with messages containing "glow_v", within 10 seconds after each of the subsearch events.

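One sketch of how this pattern is often handled is the map command, which runs an inner search once per result row and substitutes $field$ tokens. This assumes the time bounds are passed as epoch values (which earliest/latest accept) and that the number of rows stays within maxsearches:

index="june_analytics_logs_prod" (message=* new_state: Diagnostic, old_state: Home*)
| spath serial output=serial_number
| eval t_start=_time, t_end=_time+10
| table serial_number t_start t_end
| map maxsearches=100 search="search index=june_analytics_logs_prod serial=$serial_number$ message=*glow_v:* earliest=$t_start$ latest=$t_end$"
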
The main question is: does configuration file precedence apply to savedsearches.conf? The documentation for savedsearches.conf states that I should read about configuration file precedence. https://docs.splunk.com/Documentation/Splunk/9.3.0/admin/Savedsearchesconf https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Wheretofindtheconfigurationfiles According to the configuration file precedence page, the priority of saved searches is determined by the application/user context, in reverse lexicographic order. That is, the configuration from add-on B overrides the configuration from add-on A. I have a saved search defined in add-on A (an add-on from Splunkbase). There is a missing index call in its SPL. I created app B with a savedsearches.conf, created an identically named stanza there, and provided a single parameter, "search =". In that parameter I put a new SPL query that contains the particular index call. I was hoping that my new add-on named B would override the search query in add-on A, but it didn't; Splunk reports that I have a duplicate configuration. I hope I described this in an understandable way. I must be missing something.

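For illustration, a sketch of the override that is usually suggested instead, assuming you are able to write to add-on A's local directory (the stanza body below is a placeholder): saved searches are app-scoped knowledge objects, so a same-named stanza in a different app tends to be treated as a second, conflicting saved search rather than a layered override, whereas default/local layering inside the same app does merge.

# $SPLUNK_HOME/etc/apps/<add-on A>/local/savedsearches.conf
# The stanza name must exactly match the one in add-on A's
# default/savedsearches.conf; only the changed attribute is needed.
[Name of the saved search from add-on A]
search = index=your_index <rest of the corrected SPL>
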
Wondering if there are any industry best practices and/or recommendations for setting fileSizeGB and fileCount thresholds when searching for/detecting data exfiltration over USB devices with the help of Proofpoint ITM events in Splunk. I know we all have different levels of risk as we try to limit the number of false positives for our SOC team. We started out with eval fileSizeGB=(Total/1000000) | where fileSizeGB > 100 AND fileCount > 100. These thresholds are yielding few detections/alerts, so we know we need to lower them. You can probably guess any insider threat team would want fileSizeGB > 10 AND fileCount > 1. Just trying to find a happy medium for all, so any best practices or suggestions are appreciated.

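For what it's worth, a sketch of a per-user hourly rollup that alerts when either threshold trips, which is often easier to tune than a single AND condition. The field names (Total, fileName, user) are assumptions based on the snippet above, and the byte-to-GB division is just one convention; adjust to the actual Proofpoint ITM field names.

<your Proofpoint ITM base search>
| bin _time span=1h
| stats sum(Total) as totalBytes dc(fileName) as fileCount by user _time
| eval fileSizeGB=round(totalBytes/1024/1024/1024,2)
| where fileSizeGB > 10 OR fileCount > 100
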
Example:
1st report date range is 1st June - 16th June
2nd report date range is 17th June - 30th June
and have both reports sent at the beginning of the next month, July 1st.
Next month rolls in...
1st report date range is 1st July - 16th July
2nd report date range is 17th July - 31st July
and have both reports sent at the beginning of the next month, August 1st, and so on.

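As a sketch (verify the boundaries against your own data), the two windows can be expressed with relative time modifiers in two scheduled searches that both run on the 1st of each month:

1st report, covering the 1st through the 16th of the previous month:
earliest=-1mon@mon latest=-1mon@mon+16d

2nd report, covering the 17th through the end of the previous month:
earliest=-1mon@mon+16d latest=@mon
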
Hello All,

I need to find searches that were run with a time range of All Time. I used the SPL below:

index=_audit action=search provenance=* info=completed host IN (...)
|table user, apiStartTime, apiEndTime, search_et, search_lt, search
|search apiStartTime='ZERO_TIME' OR apiEndTime='ZERO_TIME'
|convert ctime(search_*)

I get results with apiStartTime empty, apiEndTime as 'ZERO_TIME', search_et 07/31/2024 00:00:00, and search_lt 08/29/2024 13:10:58. How do I interpret these results, and how do I modify the SPL to fetch the correct results?

Thank you
Taruchit

Hi, can you please help me with the code I can add to have more options for the Dynatrace collection interval for v2 metrics collection? For example, collecting 4 minutes of data every 4 minutes. Thanks! #Dyntraceaddon

Hello, thank you in advance for your help. I just need to create a field in Splunk Search that contains the value between two delimiters. The delimiter is "?". For example, given Athena.siteone.com?suvathp001?443, what would be the regex to extract only suvathp001? Thanks again for your help, Tom

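A minimal sketch with rex, assuming the whole string sits in a field called url (swap in your actual field name): capture everything between the first and second "?".

| rex field=url "\?(?<middle_value>[^?]+)\?"
| table url middle_value
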
In our last post, we mentioned that the 3 key pieces of observability – metrics, logs, and traces – provide invaluable backend system insight so that we can detect and resolve failures fast. But how can we work towards proactively preventing system failures before they ever surface to our customers?

Frontend monitoring provides insight into what users experience when interacting with our applications. Splunk provides solutions for Real User Monitoring (RUM) to monitor actual user interactions and Synthetic Monitoring to simulate end-user traffic. While both are critical to observability and Digital Experience Monitoring (DEM), we'll start with Splunk Synthetic Monitoring in this post. Let's explore how Splunk Synthetic Monitoring works and what it brings to our observability practice.

Synthetic Monitoring in Action

Navigating to Synthetics from Splunk Observability Cloud, we land on the Overview page. Here we can see a list of all of our existing Synthetic tests. We can filter to look for a specific type of test, sort the list by clicking on the table headings, or create new tests and global variables (variables that are shareable throughout Browser and API tests – a good place to store login credentials, for example). If we select Add new test, we can specify which type of test we want to create: Browser test, API test, or Uptime test.

Browser Tests

Browser tests simulate the run of a workflow or set of requests that make up the user experience and continuously collect performance data on these requests. They can be executed from many devices and from a number of locations around the world to ensure that no matter how users access applications or where they access them from, they'll experience consistent performance. They also create a filmstrip with screenshots and a video replay for easy viewing of all actions executed during the session and their results in the browser. Detectors can be configured to alert on any errors or latency encountered during the test run.

Here we've created a simple Browser test. It runs in the AWS-hosted Splunk public locations of Frankfurt, London, and Paris on a desktop device with a standard network connection and a 1366 x 768 viewport. Our test runs every minute and runs through one location at a time (round-robin), rather than concurrently hitting all locations. This particular test doesn't yet have detectors configured, but if we selected Create detector, we could easily set up detectors and alerts for if/when this test fails.

Browser tests can hit a single page or be transactional with multiple steps. You can use this to evaluate real customer transactions, like logging in to your site or buying a product. In this example test, we have 7 steps that execute different possible paths a user can take when interacting with our e-commerce website. The first step includes multiple transactions. The Home transaction goes to a specified URL for our site. The following Shop transaction executes the provided JavaScript to select a random product from a list of products sold on our site. Then the product is added to a shopping cart, an order is placed, checkout is confirmed, and the test returns to keep browsing products. You can see that the Place Order transaction contains multiple steps. Different actions and selectors are available within each step. Additionally, steps can be imported via JSON files generated from the Google Chrome Recorder, but we'll go into this in a separate post.

One note: you aren’t limited to running these tests on your own sites. It’s possible to run Synthetic tests against other sites, like those of competitors, to benchmark your performance against them.

API Tests

API tests check the transactional functionality and performance of API endpoints. They verify that endpoints are up and running and return the correct data and response codes. Alerts can also be configured on API tests based on any part of the HTTP request or response. When we create a new API test, we configure setup steps and request steps by selecting Add requests. Setup steps include the actions required to set up the request, and the request step is the actual body of the API request. Validation steps allow you to check the response body, run JavaScript on the response, save, or extract values.

Here we have a test that hits the Spotify API. In the first request we grant server-to-server access by providing a Spotify authorization token we have specified as a global variable for our Synthetic tests. This way, when the token changes, we don’t need to edit every test that uses it, just the variable. In the validation step, the $.access_token is extracted from the Spotify response and saved to a custom variable. This variable (custom.access_token) is then used in a subsequent request to search for a specific track name. We can extract values from the response body in the validation step or do things like assert the response body or headers contain certain values, assert the response code is what we expect, etc.

Uptime Tests

Uptime tests can either be HTTP tests or Port tests. They don’t parse HTML, load images, or run JavaScript but make a request and collect metric data on response time, response code, DNS time (for HTTP tests), and time to first byte. HTTP tests hit a specified URL or endpoint, and Port tests make a TCP (Transmission Control Protocol) or UDP (User Datagram Protocol) request to the specified server port.

Test History

If you open up any of the Synthetic tests from the Overview page, you’ll land on the Test History page. Here you’ll find a summary of performance trends, Key Performance Indicators (KPIs), and recent run results. At the top of the page, we have line graph charts showing data trends for the last day, 8 days, and 30 days. The bar chart summarizes the overall results for the given time period. The Performance KPI chart is a customizable visualization with adjustable settings. You can view test run details by selecting any point in the chart or by selecting a test run from the Recent run results table below the KPI chart.

Every Browser test run generates additional charts and metrics, and the interaction between the test runner and the site being tested is represented as a waterfall chart on the test run results page. Browser test run results also include a filmstrip screenshot of site performance and a video of the site loading in real time. This lets you see how the page responds in real time and see exactly what a user trying to load your site would see. When there’s a problem, you can select the APM link next to the related network activity to jump directly into APM and see what in your backend services may have contributed to the issue. Browser tests capture 40+ metrics (including core web vitals) that you can use to extend your view into site performance by configuring charts, dashboards, and detectors using these custom metrics.

Wrap Up

You’re now ready to set up your first Browser, API, or Uptime test to find, fix, and proactively prevent performance issues that could affect key user transactions. Don’t yet have Splunk Observability Cloud? Try it out free for 14 days!

Resources

Introduction to Splunk Synthetic Monitoring in Splunk Observability Cloud
Introduction to Splunk Synthetic Monitoring
What Is Splunk Synthetic Monitoring
Why You Need Synthetic Monitoring

Background

I have a very legacy application with bad/inconsistent log formatting, and I want to be able to somehow collect this in Splunk via Universal Forwarder. The issue is with multiline events, which dump XML documents containing separate timestamps into log messages.

Issue

Because these multiline messages contain a timestamp within the body of the XML, and this becomes part of the body of the log message, Splunk is inserting events with "impossible" timestamps. For example, an event will get indexed as happening in 2019 when it is actually a log event from 2024 that output an XML body containing an <example></example> element with a 2019 timestamp, and part of this body is stored as a Splunk event from 5 years ago.

Constraints

- I cannot modify the configuration of the Splunk indexer/search head/anything other than the Universal Forwarder that I control.
- I do not have access to licensing to be able to run any Heavy Forwarders; I can only go from the Universal Forwarder on hosts which I control directly to an HTTP Event Collector endpoint that I do not control.
- I cannot (easily) change the log format to not dump these bodies. There is a long-term ask on the team to fix up logging to be a) consistent and b) more ingest-friendly, but I'm looking for any interim solution that I can apply on the component I control directly, which is basically the Universal Forwarder only.

Ideas?

My only idea so far is a custom sourcetype which specifies the log timestamp format exactly, including a regex anchor to the start of the line, and also reduces/removes the MAX_TIMESTAMP_LOOKAHEAD value to stop Splunk from looking past the first match. I believe this would mean that all the lines in an event would be considered correctly, because the XML document would start with either whitespace or a < character. However, my understanding is that this would require a change either to the indexer or to a Heavy Forwarder, which I can't do. I'm looking for any alternatives this community can offer as a potential workaround until the log sanitization effort gets off the ground.

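For illustration, a sketch of the kind of props.conf stanza the idea above describes. The sourcetype name and timestamp format are placeholders, not the application's real values, and as the poster notes these are parse-time settings that would normally have to live on whatever tier does the parsing (indexer or heavy forwarder) rather than on the Universal Forwarder.

# props.conf - sketch of the custom-sourcetype idea
[legacy_app]
# Only trust timestamps anchored to the start of a line...
TIME_PREFIX = ^
# ...in this exact (placeholder) format...
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
# ...and stop looking after the first few characters, so dates buried
# inside the XML body are never picked up.
MAX_TIMESTAMP_LOOKAHEAD = 23
# Start a new event only when a line begins with a timestamp.
BREAK_ONLY_BEFORE_DATE = true
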
Hi, could you please add a troubleshooting description for the app? We just installed it and unfortunately can't configure it; the configuration page immediately returns an HTTP 500, e.g.:

"GET /en-US/splunkd/__raw/servicesNS/nobody/ta-mdi-health-splunk/TA_microsoft_graph_security_add_on_for_splunk_microsoft_graph_security?output_mode=json&count=-1 HTTP/1.1" 500 303 "-" "

Thanks