Whether it’s hummus, a ham sandwich, or a human, almost everything in this world has an expiration date. And Splunk Education Training Units are no exception – they always expire one year from the date of purchase. Don’t let these slip by without using them to gain access to our valuable instructor-led training or eLearning with labs.

Training units are the currency of career advancement

Insights from our latest Splunk Career Impact Survey prove that taking Splunk Education training is paying off. According to the survey results, very proficient practitioners of Splunk are 2.7 times more likely to get promoted, and those with Splunk certifications plus higher levels of Splunk proficiency reported earning approximately 131% more than their less-proficient peers. With more Splunk skills, you're on the way to future-proofing your career!

Work with your org manager to use those training units

If you’re a Splunk user at one of our customers, make sure you connect with the person in your company who manages the Splunk training units. Let them know that you would love to take some of those already-paid-for training units off their hands so they don’t expire before they get used. Oh, but if you’re just a curious learner itching to get more Splunk courses under your belt and don’t officially use Splunk at your org, you can purchase training and register for class using a credit card. No problem!

Enroll in some classes today – Here’s what’s popular

Here are just a few of the more popular instructor-led Splunk Education courses to choose from:

Intro to Splunk
Using Fields
Visualizations
Working with Time
Statistical Processing
Comparing Values
Result Modification

And remember that training units can also be used for eLearning with labs if you prefer more self-paced learning.

See you in class – even at the last minute

In the next two weeks, there are still seats available via our Last Minute Learning program. You can quickly get into some popular virtual instructor-led training courses that have low enrollment – available for purchase via training units or credit card. This is a great opportunity to use those training units that might be expiring soon! If your company has purchased a contract of TUs, please consult with your Org Manager to enroll in these paid classes.

At Splunk Education, we get excited about helping our learners excel. So, go get those training units and we’ll see you in class!

-Callie Skokos on behalf of the Splunk Education team
___________________________________
Some fine print

Training units expire at 12:01 AM Eastern Time on the expiration date. Registrations paid by training units must be placed before 12:01 AM Eastern Time. All instructor-led classes and dedicated classes paid by training units must start the day before the training units expire. Get all the details here.
I have a search query where, if the Status field stays Down for more than 5 minutes, I need to trigger an alert no matter what the event count result is; if it recovers within that timeframe, the alert should not fire. Maybe even have it search every 1 minute.

For example, this should not fire an alert because it recovered within the 5 minutes:

1:00 Status = Down (event result count X5)
1:03 Status = up
1:07 Status = Down (event count X3)
1:10 Status = up
1:13 Status = up
1:16 Status = up

For example, this should fire an alert:

1:00 Status = Down (event result count X1)
1:03 Status = Down (event result count X1)
1:07 Status = Down (event result count X1)
1:10 Status = up
1:13 Status = up
1:16 Status = up
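Not part of the original question, but a minimal sketch of one way this is often approached, assuming the index, sourcetype, and the exact Status values ("Down"/"up") below are placeholders: keep only the events where Status changes, measure how long each Down run lasted, and fire when a run exceeds 300 seconds.

index=your_index sourcetype=your_sourcetype Status=*
| sort 0 _time
| streamstats current=f last(Status) as prev_status
| where isnull(prev_status) OR Status!=prev_status
| streamstats current=f last(_time) as run_start
| eval run_duration = _time - run_start
| where prev_status="Down" AND run_duration > 300

Each surviving event marks a status transition, so an "up" event whose previous transition was "Down" closes a Down run, and run_duration is how long that run lasted. A still-open Down run (no "up" event yet) would need an extra check, such as comparing now() against the time of the last Down transition. Scheduled every minute over a sliding window, this should fire regardless of how many raw events each state produced.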
I am not seeing results for count on each of the fields for the 2 different searches below.

The first one shows the (let's say 3) storefront names with no counts. If I just run a | stats count by Storefront it returns the correct number of counts. The fields are created in the statistics tab, but with no counts or names of the NetScalers, site, or user. The second search does not return any statistical results. I am hoping to see the count of connections to the Storefront and its correlating NetScaler in a Sankey diagram.

| stats count by Storefront
| rename Storefront as source
| appendpipe [ stats count by Netscaler | rename Netscaler as source, count as count_Netscaler ]
| appendpipe [ stats count by site | rename site as source, count as count_site ]
| appendpipe [ stats count by UserName | rename UserName as source, count as count_UserName ]
| fields source, count_Netscaler, count_site, count_UserName
| search source=*

| stats count by Storefront
| rename Storefront as source
| appendpipe [ stats count by Netscaler | rename Netscaler as source, Storefront as target ]
| appendpipe [ stats count by site | rename site as source, Netscaler as target ]
| appendpipe [ stats count by UserName | rename UserName as source, site as target ]
| search source=* AND target=*
| stats sum(count) as count by source, target
| fields source, target, count
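Not part of the original post: one possible explanation is that once the first | stats count by Storefront runs, fields like Netscaler, site, and UserName no longer exist in the result set, so the appendpipe subsearches have nothing to count. Below is a minimal sketch of one common Sankey pattern, assuming the raw events contain all four fields and using a placeholder base search.

index=your_index sourcetype=your_sourcetype
| stats count by Netscaler Storefront
| rename Netscaler as source, Storefront as target
| append [ search index=your_index sourcetype=your_sourcetype | stats count by site Netscaler | rename site as source, Netscaler as target ]
| append [ search index=your_index sourcetype=your_sourcetype | stats count by UserName site | rename UserName as source, site as target ]
| table source target count

Each append runs against the raw events again (rather than the already-aggregated rows), so every source/target pair keeps its own count for the Sankey visualization.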
Intro

In a Kubernetes environment, you can scale your application up or down with a simple command, a UI, or automatically with autoscalers. However, to scale successfully, you need to know when you’re hitting scaling limits and if/when your scaling efforts are effective. Otherwise, you might continue to use resources inefficiently or hit avoidable application performance issues. In this post, we’ll check out Kubernetes Horizontal Pod Autoscaling (HPA), when you might use HPA, caveats you might hit when scaling pods, and how you can use Splunk Observability Cloud to gain insight into your Kubernetes environment to ensure you’re scaling efficiently and effectively.

Kubernetes Autoscaling

Autoscaling is an awesome way to increase the capacity of your Kubernetes environment to match application resource demands with minimal manual intervention. With autoscaling, scalable resources automatically increase or decrease with variable demand. This creates a more elastic, more performant, and more efficient (both in terms of application resource consumption and infrastructure costs) Kubernetes environment.

Kubernetes supports both vertical and horizontal scaling. With vertical scaling (up/down), resources like memory and CPU are adjusted in place (think increasing/decreasing memory for an existing workload). With horizontal scaling (in/out), the number of replicas increases or decreases (think increasing/decreasing the number of workloads). Vertical scaling is great for right-sizing your Kubernetes workloads to ensure they have the resources they need. Horizontal scaling is great for dynamically scaling to meet unexpected bursts or busts in traffic and distribute the load.

Horizontal and vertical autoscaling can be configured at the cluster and/or pod level using Cluster Autoscaling, Vertical Pod Autoscaling, and/or Horizontal Pod Autoscaling. The Horizontal Pod Autoscaler (HPA) is the only autoscaler included by default with Kubernetes, so we’ll keep our focus on HPA for now.

Horizontal Pod Autoscaling

To scale a Kubernetes workload resource like a Deployment or StatefulSet based on the current demand for resources, you can manually scale workloads, or you can automatically scale workloads through autoscaling. Scaling up or down automatically to match demand reduces the need for manual intervention and ensures efficient resource use within your Kubernetes infrastructure. If load increases, horizontal scaling responds by deploying more pods. Conversely, if load decreases, the HorizontalPodAutoscaler instructs the workload resources to scale down, as long as the number of pods is above the configured minimum.

Horizontal Pod Autoscaling Gotchas

Automatically scaling pods is a hugely beneficial feature of Kubernetes, but there are some caveats when implementing Horizontal Pod Autoscaling. Here are some things to be aware of:

Metric lag: because the HorizontalPodAutoscaler continuously checks the Metrics API for resource usage in order to inform scaling behavior, there can be a lag between monitoring usage and scaling. HPA checks metrics every 15 seconds by default.
Vertical scaling conflicts: VPA and HPA shouldn’t be used together when based on the same metrics – this can lead to competing and conflicting scaling decisions.
Resource limits: if requests and limits aren’t properly configured, HPA might not be able to scale out. Fine-tuning thresholds can be tricky and requires monitoring resource limits.
Resource competition: new pods spinning up can compete for resources and can also take time to initialize and stabilize. Not all applications can easily scale horizontally (single-threaded applications, those with order-dependent queues, databases, etc.). Before implementing HPA, you need to determine application compatibility.
DaemonSets: HPA doesn’t apply to DaemonSets – if you want to scale your DaemonSet, you probably should scale your node pool instead.
Dependency bottlenecks: external dependencies (such as 3rd-party APIs) might not scale at the same rate, or at all – you should have a plan to scale those as well.

Let’s HPA

Now that we know what Horizontal Pod Autoscaling is and some things to be aware of when working with HPA, let’s see it in action.

We have a PHP/Apache Kubernetes deployment in the apache namespace that is exporting OpenTelemetry data to Splunk Observability Cloud. Our deployment creates a new StatefulSet with a single replica. Let’s jump into the Splunk Observability Cloud Kubernetes Navigator, which we explored in a previous post.

In the Navigator, if we filter down to our cluster and the apache namespace, we can see that we currently only have one pod in our node. The pod is receiving some significant load, and for HPA example purposes, we have deliberately limited the resources for each Apache pod. We can see spikes in CPU and memory usage that are leading to insufficient resources.

The lack of required resources is throwing containers into a CrashLoopBackOff. For a minute we’ll have 1 active container; then suddenly, that container will crash and we’ll have 0 active containers before it attempts to restart again. Not only can we see these containers starting and stopping in real time, but the restarts triggered an AutoDetect detector that would have notified our team of an issue.

The Kubernetes Navigator helped us identify our resource issues and the impact they’re having on our containers, but now we need to resolve these issues. Let’s set up Horizontal Pod Autoscaling so our workload will automatically respond to this increased load and scale out by deploying more pods. First, we’ll create our HPA configuration file at ~/workshop/k3s/hpa.yaml (a sketch of such a file appears below). The HorizontalPodAutoscaler object specifies the behavior of the autoscaler. You can control resource utilization, set the min/max number of replicas, specify the direction of scaling (up/down), set target resources to scale, etc. We’ll apply the configuration by running kubectl apply -f ~/workshop/k3s/hpa.yaml.

We can see that the autoscaler was created, and we can validate Horizontal Pod Autoscaling with the kubectl get hpa -n apache command. Now that HPA is deployed, our php-apache service will autoscale when either the average CPU usage goes above 50% or the average memory usage for the deployment goes above 75%, with a minimum of 1 pod and a maximum of 4 pods. In the Kubernetes Navigator nodes view, we can validate that we now have 4 pods to handle the increased load; we’ve added a filter to highlight the 4 pods in the apache namespace. Looking at our K8s pods tab, we can see additional pod-level metrics and again verify that the number of active pods is now 4. If we wanted to increase the number of pods to 8, we could simply update our hpa.yaml and specify 8 maxReplicas.
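The post references hpa.yaml without showing its contents, so here is a minimal sketch of what the updated file might look like after raising maxReplicas to 8. The 50% CPU and 75% memory targets and the minimum of 1 replica come from the post; the target kind and name (a Deployment called php-apache) and the apache namespace are assumptions for illustration.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment          # assumption - could equally be a StatefulSet
    name: php-apache          # assumption - the workload name in this environment
  minReplicas: 1
  maxReplicas: 8              # raised from 4 to allow up to 8 pods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50    # scale out above 50% average CPU
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 75    # scale out above 75% average memory

Re-applying this with kubectl apply -f ~/workshop/k3s/hpa.yaml would let the autoscaler grow the workload to the new maximum.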
Once deployed, we can see we now have 8 active Apache pods. After configuring our HorizontalPodAutoscaler, we can sit back and watch our container count remain steady as our pods autoscale to handle the increased traffic.

Wrap Up

If you’re interested in automatically scaling Kubernetes workloads to match increased load with minimal manual intervention, Horizontal Pod Autoscaling might be for you. Before you get started, watch out for some of those common gotchas we mentioned. To identify pods running heavy on resource utilization, and where you might benefit from setting up HPA, check out the Splunk Observability Cloud Kubernetes Navigator. Don’t have Splunk Observability Cloud? We’ve got you. Start a Splunk Observability Cloud free trial! Ready to jump into the Kubernetes Navigator? Get started integrating Kubernetes and Splunk Observability Cloud!
Hi there, I have a small lab at home on which I am running Splunk Enterprise 9.0.0 (build 6818ac46f2ec) with a developer license. The Licensing » Installed licenses page shows 3 valid licenses with the following information:

Splunk Enterprise Term Non-Production License
creation_time: 2024-08-11 07:00:00+00:00
expiration_time: 2025-02-11 07:59:59+00:00
features: Acceleration AdvancedSearchCommands AdvancedXML Alerting ArchiveToHdfs Auth ConditionalLicensingEnforcement CustomRoles DeployClient DeployServer FwdData GuestPass KVStore LocalSearch MultifactorAuth NontableLookups RcvData RollingWindowAlerts SAMLAuth ScheduledAlerts ScheduledReports ScheduledSearch ScriptedAuth SigningProcessor SplunkWeb SubgroupId SyslogOutputProcessor
is_unlimited: False
label: Splunk Enterprise Term Non-Production License
max_violations: 5
notes: None
payload: None
quota_bytes: 53687091200.0
sourcetypes:
stack_name: enterprise
status: VALID
type: enterprise
window_period: 30

Splunk Forwarder
creation_time: 2010-06-20 07:00:00+00:00
expiration_time: 2038-01-19 03:14:07+00:00
features: Auth DeployClient FwdData RcvData SigningProcessor SplunkWeb SyslogOutputProcessor
hash: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD
is_unlimited: False
label: Splunk Forwarder
max_violations: 5
notes: None
payload: None
quota_bytes: 1048576.0
sourcetypes:
stack_name: forwarder
status: VALID
type: forwarder
window_period: 30

Splunk Free
creation_time: 2010-06-20 07:00:00+00:00
expiration_time: 2038-01-19 03:14:07+00:00
features: FwdData KVStore LocalSearch RcvData ScheduledSearch SigningProcessor SplunkWeb
hash: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
is_unlimited: False
label: Splunk Free
max_violations: 3
notes: None
payload: None
quota_bytes: 524288000.0
sourcetypes:
stack_name: free
status: VALID
type: free
window_period: 30

I would like to experiment with Splunk Stream for capturing DNS records before implementing it in our production environment. I have installed Splunk Stream 8.1.3 and most of the menus within the app work; however, when I go to Configuration > Distributed Forwarder Management it just displays a blank page. When I look at splunk_app_stream.log I can see the following error:

2024-08-15 14:51:58,543 ERROR rest_indexers:62 - failed to get indexers peer
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_stream/bin/rest_indexers.py", line 55, in handle_GET
    timeout=splunk.rest.SPLUNKD_CONNECTION_TIMEOUT
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 612, in simpleRequest
    raise splunk.LicenseRestriction
splunk.LicenseRestriction: [HTTP 402] Current license does not allow the requested action
2024-08-15 14:51:58,580 ERROR indexer:52 - failed to list indexers
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_stream/bin/splunk_app_stream/models/indexer.py", line 43, in get_indexers
    timeout=splunk.rest.SPLUNKD_CONNECTION_TIMEOUT
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 669, in simpleRequest
    raise splunk.InternalServerError(None, serverResponse.messages)
splunk.InternalServerError: [HTTP 500] Splunkd internal error; []

Does this mean that the Splunk dev license does not support the Splunk Stream app?
I have a dropdown where I select the event name, and that event name value is passed as a token to the variable search. This variable search is a multiselect. One issue that I've noticed is that the multiselect values stay populated when a different event is selected. The search for the variable will update the dropdown, though. Is there a way to reset the selected variables when a different event is selected? I have seen the Simple XML versions of this but haven't seen any information on how to do this in Dashboard Studio. Any help is greatly appreciated.

{ "visualizations": { "viz_Visualization": { "type": "splunk.line", "dataSources": { "primary": "ds_mainSearch" }, "options": { "overlayFields": [], "y": "> primary | frameBySeriesNames($dd2|s$)", "y2": "> primary | frameBySeriesNames('')", "lineWidth": 3, "showLineSmoothing": true, "xAxisMaxLabelParts": 2, "showRoundedY2AxisLabels": false, "x": "> primary | seriesByName('_time')" }, "title": "Visualization", "containerOptions": { "visibility": {} }, "eventHandlers": [ { "type": "drilldown.linkToSearch", "options": { "type": "auto", "newTab": false } } ] } }, "dataSources": { "ds_dd1": { "type": "ds.search", "options": { "query": "index=index source=source sourcetype=sourcetype |dedup EventName \n| sort str(EventName)" }, "name": "dd1Search" }, "ds_mainSearch": { "type": "ds.search", "options": { "query": "index=index source=source sourcetype=sourcetype EventName IN (\"$dd1$\") VariableName IN ($dd2|s$) \n| timechart span=5m max(Value) by VariableName", "enableSmartSources": true }, "name": "mainSearch" }, "ds_dd2": { "type": "ds.search", "options": { "enableSmartSources": true, "query": "index=index source=source sourcetype=sourcetype EventName = \"$dd1$\" |dedup VariableName \n| sort str(VariableName)" }, "name": "dd2Search" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" }, "input_dd1": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "token": "dd1" }, "encoding": { "label": "primary[0]", "value": "primary[0]" }, "dataSources": { "primary": "ds_dd1" }, "title": "Event Name", "type": "input.dropdown", "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [], "label": ">primary | seriesByName(\"EventName\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"EventName\") | renameSeries(\"value\") | formatByType(formattedConfig)" } }, "input_dd2": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "token": "dd2" }, "encoding": { "label": "primary[0]", "value": "primary[0]" }, "dataSources": { "primary": "ds_dd2" }, "title": "Variable(s)", "type": "input.multiselect", "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [], "label": ">primary | seriesByName(\"VariableName\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"VariableName\") | renameSeries(\"value\") | formatByType(formattedConfig)" } } }, "layout": { "type": "grid", "options": { "width": 1440, "height": 960 }, "structure": [ { "item": "viz_Visualization", "type": "block",
"position": { "x": 0, "y": 0, "w": 1440, "h": 653 } } ], "globalInputs": [ "input_global_trp", "input_dd1", "input_dd2" ] }, "description": "", "title": "Test" }  
Hello, I am struggling to make a base search using a data model with the tstats command. My objective is to build a dashboard that uses tstats against data models as a base search, and chain a post-process search for each panel off that. This is my sample:

| tstats summariesonly=true values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest) as dest values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.hostname) as hostname values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.os_type) as os_type values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.exploit_title) as exploit_title values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.malware_title) as malware_title from datamodel=Vulnerabilities_Custom.Vulnerabilities_Non_Remediation where nodename IN ("Vulnerabilities_Custom.Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.High_Or_Critical_Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.Medium_Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.Low_Or_Informational_Vulnerabilities_Non_Remediation") by Vulnerabilities_Custom.Vulnerabilities_Non_Remediation._time, Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest
| table event_time dest hostname os_type exploit_title malware_title

Does anyone have clues about this?
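Not part of the original post, but one pattern that is often suggested, sketched here under the assumption that the base search keeps the long prefixed by-field names: strip the data model prefix in the base search so each panel's chained (post-process) search can work with short field names.

| tstats summariesonly=true count
    from datamodel=Vulnerabilities_Custom.Vulnerabilities_Non_Remediation
    by Vulnerabilities_Custom.Vulnerabilities_Non_Remediation._time,
       Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest,
       Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.hostname
| rename "Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.*" as *

A panel could then use this as its base search and post-process it with something like | stats count by dest hostname, since the rename leaves plain dest, hostname, and _time fields for the chained searches to reference.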
How can I regularly poll an HTTP endpoint on a remote server to collect useful metrics, then import them into Splunk (hourly, for example) and use them for useful visualisations?
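Not part of the original question, but one possible approach, sketched here, is a small script run on a schedule (cron, a Splunk scripted/modular input, or similar) that pulls the endpoint and forwards the response to an HTTP Event Collector (HEC) token. The endpoint URL, HEC URL, token, index, and sourcetype below are all placeholders.

import json
import urllib.request

METRICS_URL = "https://remote-server.example.com/metrics"              # placeholder remote endpoint
HEC_URL = "https://splunk.example.com:8088/services/collector/event"   # placeholder HEC endpoint
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                     # placeholder HEC token


def main():
    # Pull the metrics payload from the remote endpoint
    with urllib.request.urlopen(METRICS_URL, timeout=30) as resp:
        payload = resp.read().decode("utf-8")

    # Wrap the payload in a HEC event and post it to Splunk
    event = {"event": payload, "sourcetype": "remote:metrics", "index": "remote_metrics"}
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Authorization": "Splunk " + HEC_TOKEN,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(resp.read().decode("utf-8"))


if __name__ == "__main__":
    main()

Scheduling it hourly (for example, a cron entry of 0 * * * *) would give a fresh data point every hour to chart in a dashboard.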
Is this on a per-profile basis? A per-cluster basis? How does this restart back?
I've got 2 base searches:

<search id="Night">

and

<search id="Day">

And a dropdown input:

<input type="dropdown" token="shift_tok" searchWhenChanged="true">
  <label>Shift:</label>
  <choice value="Day">Day</choice>
  <choice value="Night">Night</choice>
  <default>Day</default>
  <initialValue>Day</initialValue>
</input>

I need to find a way to reference the base searches depending on the input provided by the user. I was hoping to use a token to reference the base searches, but it doesn't seem to be working:

<row>
  <panel>
    <title>Timeline</title>
    <table>
      <title>$shift_tok$</title>
      <search base="$Shift_tok$">
        <query>| table Date Shift Timeline "Hourly details of shift"</query>
      </search>
      <option name="count">13</option>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>
</form>
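Not part of the original post: token substitution in the base attribute is generally reported not to work in Simple XML, so one commonly suggested workaround, sketched here with a placeholder query, is a single base search that itself filters on the token, with the panel post-processing that one base.

<search id="Shift_base">
  <query>index=main sourcetype=shift_data Shift="$shift_tok$" | fields Date Shift Timeline "Hourly details of shift"</query>
</search>

<row>
  <panel>
    <title>Timeline</title>
    <table>
      <title>$shift_tok$</title>
      <search base="Shift_base">
        <query>| table Date Shift Timeline "Hourly details of shift"</query>
      </search>
    </table>
  </panel>
</row>

The index, sourcetype, and Shift field in the base query are assumptions; the idea is simply that the dropdown drives a filter inside one base search instead of selecting between two base search ids.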
Hi, I'm unable to launch the Splunk Add-on for AWS page on the admin console; the page shows as Loading but there is no output at all. No abnormalities are seen in splunkd.log, only some checksum mismatch errors. My Splunk was recently upgraded to 9.2.2; the last time I tried on an earlier version it was working. The Splunk Add-on for AWS version is 5.1.0. Has anyone come across the same issue and managed to resolve it?
This is my current search query:

index=abc sourcetype = example_sourcetype
| transaction startswith="Saved messages to DB" endswith="Done bulk saving messages" keepevicted=t
| eval no_msg_wait_time = mvcount(noMessageHandleCounter) * 1000
| fillnull no_msg_wait_time
| rename duration as processing_time
| eval _raw = mvindex(split(_raw, " "), -1)
| rex "Done Bulk saving .+ used (?<db_bulk_write_time>\w+)"
| eval processing_time = processing_time * 1000
| eval mq_read_time = processing_time - db_bulk_write_time - no_msg_wait_time
| where db_bulk_write_time > 0
| rename processing_time as "processing_time(ms)", db_bulk_write_time as "db_bulk_write_time(ms)", no_msg_wait_time as "no_msg_wait_time(ms)", mq_read_time as "mq_read_time(ms)"
| table _time, processing_time(ms), db_bulk_write_time(ms), no_msg_wait_time(ms), mq_read_time(ms), Count, _raw

Now, for the processing_time(ms) column, I instead want the calculation to start from the second-most-recent occurrence of "All Read threads finished flush the messages" and end at "Done bulk saving messages". So in the example below, 2024-08-12 10:02:20,542 will have a processing_time from 10:02:19,417 to 10:02:20,542.

2024-08-12 10:02:19,417 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-12 10:02:20,526 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1  Count=1
2024-08-12 10:02:20,542 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 6 ms

How can I also create a time series graph on the same chart, where the x-axis is time and the y-axis shows a bar chart of the Count column plus a line chart of the new processing_time(ms)?

Raw log data looks something like:

| makeresults
| eval data = split("2024-08-07 21:13:07,710 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-07 21:13:07,710 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=4), retry in 1000 ms
2024-08-07 21:13:08,742 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-07 21:13:08,742 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=5), retry in 1000 ms
2024-08-07 21:13:09,757 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-07 21:13:09,757 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=6), retry in 1000 ms
2024-08-07 21:13:10,773 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-07 21:13:10,773 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=7), retry in 1000 ms
2024-08-07 21:13:11,007 [15] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:4504
2024-08-07 21:13:11,132 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=1), retry in 10 ms.
2024-08-07 21:13:11,257 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=2), retry in 10 ms.
2024-08-07 21:13:11,382 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=3), retry in 10 ms.
2024-08-07 21:13:11,507 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=4), retry in 10 ms.
2024-08-07 21:13:11,632 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=5), retry in 10 ms. 2024-08-07 21:13:11,757 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=6), retry in 10 ms. 2024-08-07 21:13:11,882 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=7), retry in 10 ms. 2024-08-07 21:13:11,882 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1 2024-08-07 21:13:11,882 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1 2024-08-07 21:13:11,882 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True 2024-08-07 21:13:11,882 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1 2024-08-07 21:13:12,007 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 113 ms 2024-08-07 21:13:12,007 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=8), retry in 10 ms. 2024-08-07 21:13:12,054 [39] INFO DistributorCommon.WMQClient [(null)] - Saved messages to DB, Q Manager to Commit (Remove messages from Queue) 2024-08-07 21:13:12,132 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=9), retry in 10 ms. 2024-08-07 21:13:12,179 [39] INFO DistributorCommon.WMQClient [(null)] - Clear Write Buffer 2024-08-07 21:13:12,257 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:12,398 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:12,528 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:12,778 [33] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:4668 2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1 2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1 2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True 2024-08-07 21:13:12,809 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1 2024-08-07 21:13:12,825 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 24 ms 2024-08-07 21:13:12,841 [39] INFO DistributorCommon.WMQClient [(null)] - Saved messages to DB, Q Manager to Commit (Remove messages from Queue) 2024-08-07 21:13:12,934 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=1), retry in 10 ms. 2024-08-07 21:13:12,966 [39] INFO DistributorCommon.WMQClient [(null)] - Clear Write Buffer 2024-08-07 21:13:13,059 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=2), retry in 10 ms. 2024-08-07 21:13:13,059 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:13,184 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=3), retry in 10 ms. 
2024-08-07 21:13:13,200 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:13,325 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=4), retry in 10 ms. 2024-08-07 21:13:13,341 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:13,466 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=5), retry in 10 ms. 2024-08-07 21:13:13,466 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:13,466 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=4), retry in 1000 ms 2024-08-07 21:13:13,591 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=6), retry in 10 ms. 2024-08-07 21:13:13,716 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=7), retry in 10 ms. 2024-08-07 21:13:13,841 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=8), retry in 10 ms. 2024-08-07 21:13:13,966 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=9), retry in 10 ms. 2024-08-07 21:13:14,481 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:14,481 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=5), retry in 1000 ms 2024-08-07 21:13:15,497 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:15,497 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=6), retry in 1000 ms 2024-08-07 21:13:15,731 [20] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:7648 2024-08-07 21:13:15,856 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=1), retry in 10 ms. 2024-08-07 21:13:15,981 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=2), retry in 10 ms. 2024-08-07 21:13:16,106 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=3), retry in 10 ms. 2024-08-07 21:13:16,231 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=4), retry in 10 ms. 2024-08-07 21:13:16,356 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=5), retry in 10 ms. 2024-08-07 21:13:16,481 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=6), retry in 10 ms. 2024-08-07 21:13:16,606 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=7), retry in 10 ms. 
2024-08-07 21:13:16,606 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1 2024-08-07 21:13:16,606 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1 2024-08-07 21:13:16,606 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True 2024-08-07 21:13:16,606 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1 2024-08-07 21:13:16,622 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 11 ms 2024-08-07 21:13:16,637 [39] INFO DistributorCommon.WMQClient [(null)] - Saved messages to DB, Q Manager to Commit (Remove messages from Queue) 2024-08-07 21:13:16,731 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=8), retry in 10 ms. 2024-08-07 21:13:16,762 [39] INFO DistributorCommon.WMQClient [(null)] - Clear Write Buffer 2024-08-07 21:13:16,856 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=9), retry in 10 ms. 2024-08-07 21:13:16,856 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:16,997 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:17,137 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:17,278 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:17,278 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=4), retry in 1000 ms 2024-08-07 21:13:18,294 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:18,294 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=5), retry in 1000 ms 2024-08-07 21:13:19,309 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:19,309 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=6), retry in 1000 ms 2024-08-07 21:13:19,544 [28] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:13568 2024-08-07 21:13:19,669 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=1), retry in 10 ms. 2024-08-07 21:13:19,794 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=2), retry in 10 ms. 2024-08-07 21:13:19,919 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=3), retry in 10 ms. 2024-08-07 21:13:20,044 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=4), retry in 10 ms. 2024-08-07 21:13:20,169 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=5), retry in 10 ms. 2024-08-07 21:13:20,294 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=6), retry in 10 ms. 2024-08-07 21:13:20,419 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=7), retry in 10 ms. 
2024-08-07 21:13:20,419 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1
2024-08-07 21:13:20,419 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1
2024-08-07 21:13:20,419 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True
2024-08-07 21:13:20,419 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1
2024-08-07 21:13:20,434 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 12 ms"

It looks something like this now:

_time | processing_time | Count | db_bulk_write_time | no_msg_wait_time | _raw
2024-08-07 21:13:16.637 | 3.797 | 1 | 12 | 3000 | 2024-08-07 21:13:20,434 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 12 ms
2024-08-07 21:13:12.841 | 3.781 | 1 | 11 | 3000 | 2024-08-07 21:13:16,622 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 11 ms
2024-08-07 21:13:12.054 | 0.771 | 1 | 24 | 0 | 2024-08-07 21:13:12,825 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 24 ms
2024-08-07 21:13:07.710 | 4.297 | 1 | 113 | 4000 | 2024-08-07 21:13:12,007 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 113 ms
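Not part of the original post: for the charting half of the question, one possible sketch, assuming the rename to "processing_time(ms)" is deferred until after the chart (the 1-minute span is also an assumption), is to bucket the existing results with timechart and then overlay one series as a line.

... existing search, up to the point where processing_time and Count exist ...
| timechart span=1m sum(Count) as Count avg(processing_time) as avg_processing_time_ms

With the visualization set to a column chart, the avg_processing_time_ms series can be moved to a line via the Chart Overlay settings (or the overlayFields option in Dashboard Studio), giving bars for Count and a line for processing time on the same time axis.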
Hello, I'm wondering if we can send the Palo Alto firewall logs to Splunk *cloud* via HEC? We've done that once when evaluating another SIEM solution (CrowdStrike NG-SIEM). As for Splunk, the documents I can find on the Internet all recommend using this flow: Palo Alto -> syslog-ng + universal forwarder -> Splunk Cloud. Does anyone know why HEC is not a preferred option in this case? Any potential issue here? Regards, Iris
I have arguments for my macro that contain other values, e.g. $env:user$ and $timepicker.earliest$/$timepicker.latest$. How do I include these in my macro definition? It doesn't allow me to, since macro arguments must only contain alphanumeric, '_' and '-' characters.
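Not part of the original question, but one commonly used pattern, sketched here with a placeholder macro name, index, and field names: the macro's argument names themselves stay simple (alphanumeric/underscore), and the dashboard tokens are passed in as the argument values when the macro is called, since tokens are substituted into the search string before the macro expands.

A possible macros.conf definition:

[user_activity(3)]
args = user, earliest_tok, latest_tok
definition = index=main user="$user$" earliest="$earliest_tok$" latest="$latest_tok$"

And a possible call from a dashboard search:

`user_activity($env:user$, $timepicker.earliest$, $timepicker.latest$)`

Here user_activity, index=main, and the user field are assumptions purely for illustration; the point is that $env:user$ and the time picker tokens appear at the call site, not inside the argument names of the definition.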
Hello, I am trying to display my Splunk dashboard on a TV 24/7 at the front of my shop to show a running count of customers who support our store, and analysis of their feedback.

Issue I am having: my dashboard is NOT updating correctly. It is set to refresh every 15 minutes, but when it does this, it takes the dashboard out of full screen, which I do not want (it shows my tabs and apps rather than just the dashboard).

Question: how can I ensure that when the Splunk webpage refreshes through the browser, the dashboard is refreshed/reset in full screen? Thank you
We have a huge JSON array event. When I search for that event, the search results show a few missing values for a field. Any suggestion on how to fix this issue and have all values displayed for the field?
Currently working on a data retention log collection policy to meet M-21-31, and not sure if the below config would meet the requirement.

Current requirement:
Hot: 6 months
Warm: 24 months
Cold: 18 months
Archive or Frozen: 18 months, with a data ceiling and data deletion

Would adding these settings to the index stanza meet the above requirements? If not, please let me know what the settings and/or config should look like.

indexes.conf (add the below config to the index stanza):

maxHotSpanSecs = 15778476 - would provide around 6 months of hot bucket data
maxHotIdleSecs = 15778476
NOT sure about the warm bucket setting to get 24 months of warm bucket data
coldPath.maxDataSizeMB = 47335428 - would provide around 18 months of cold bucket data
frozenTimePeriodInSecs = 47335428 - would provide around 18 months of archived/frozen data
coldToFrozenDir = "$SPLUNK_HOME/myfrozenarchive" - send archived/frozen data to this location so the data is not deleted
We deeply believe that the best way to understand the impact of Splunk is by hearing your voice directly. So, we brought back the Splunk Love video booth this year at .conf to record your feedback and love for Splunk! We greatly appreciate all of you sharing your stories with us. This is the first blog of the Splunk Love series. Please stay tuned for more to come!

The Power of a Single Word

We wanted to capture the essence of what Splunk means to our customers in the simplest form possible: a single word. The results showcased the diverse ways in which Splunk is making a difference across industries. From "Flexible" to "Intelligence," each word reflects a unique value that Splunk brings to organizations.

Featured Customer Words

Thank you to all those who contributed! Here are some highlights: Flexible & Organized Activision Intelligence “Jungle” King Future Marvelous

Check here to see all the spotlighted Splunk Love responses. Check out this word cloud of all the responses we gathered at .conf24.

Join the Conversation

If you have further feedback and suggestions, please visit Splunk VOC to share your voice and ideas, and to join customer advisory boards and product preview programs! Your feedback is invaluable to us as we strive to provide the best experience for everyone.

Cheers,
Team Splunk
Hello, I have timestamps that are not matching. How do I table the actual "Event log time stamp"?

Splunk time stamp: 8/14/24 4:29:21.000 AM
Event log time stamp: 2024-08-13 17:49:23,006 [https-mmme-nio-1111-exec-2] ERROR
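Not part of the original question, but a minimal sketch of one way to pull the embedded timestamp out of the raw event and table it, assuming the index/sourcetype are placeholders and the event text contains a "YYYY-MM-DD HH:MM:SS,mmm" timestamp:

index=your_index sourcetype=your_sourcetype
| rex field=_raw "(?<event_log_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})"
| table _time event_log_time

If the goal is instead for _time itself to match the event's own timestamp, that is usually addressed at index time with props.conf timestamp settings rather than at search time.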
I have a CSV with IP addresses. I would like to conduct a search for addresses that are NOT listed in that CSV. I was attempting the following, but it does not render the results I was expecting. I want to search for IP addresses that are not in that list, i.e. unknown addresses. (Splunk Enterprise Security)

index=myindex
| rex "(?<ip>\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b)"
| sort ip
| table ip NOT [inputlookup known_addresses.csv]
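Not part of the original post, but a minimal sketch of one common pattern, assuming the lookup file has a column named ip (if the column has a different name, rename it inside the subsearch): filter the events with a NOT subsearch before tabling, so the subsearch expands to an OR list of known addresses that gets negated.

index=myindex
| rex "(?<ip>\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b)"
| search NOT [ | inputlookup known_addresses.csv | fields ip ]
| sort ip
| table ip

Note that subsearches have result limits, so for very large lookup files an alternative is | lookup known_addresses.csv ip OUTPUT ip as known_ip followed by | where isnull(known_ip).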