Hello, we've been advised to "Disable the ability to process ANSI escape codes in terminal applications" and I honestly don't know what that really means, and I can't find much guidance around it. It wouldn't be something we actively use, but I just don't know whether it's enabled by default or not. Can someone help me figure out how to ensure this is disabled? We have a simple standalone environment with data just coming in from universal forwarders reading local logs. Thanks in advance, Felix
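For context (general background, not a statement about any particular Splunk setting): an ANSI escape code is an in-band byte sequence, beginning with the ESC byte (0x1b), that terminals interpret as color, cursor-movement, or other control commands. The concern behind such advisories is that a malicious log line could embed these bytes and manipulate the terminal of anyone viewing the raw data. A minimal Python illustration of what such a sequence looks like and what stripping it yields:

```python
import re

# ANSI escape sequences start with ESC (0x1b) followed by '[' and a command.
ESC = "\x1b"
red = f"{ESC}[31m"    # switch foreground color to red
reset = f"{ESC}[0m"   # reset all attributes

# A log line carrying escape codes looks like ordinary text in many viewers,
# but a terminal that processes them would render it differently.
log_line = f"user=admin action={red}login_failed{reset}"

# Stripping the control bytes yields the plain text a safe viewer should show.
stripped = re.sub(r"\x1b\[[0-9;]*m", "", log_line)
print(stripped)  # -> user=admin action=login_failed
```

Whether a given Splunk version strips these bytes on display by default is version-dependent, so the sketch only demonstrates the concept being disabled, not any specific configuration.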
Hi, I have created a cluster of 3 nodes and I use the Splunk REST API to log in. I regularly get a session key after the login, but if the next call goes to a node other than the one that logged me in, I get a 401 even though I pass the session key in the call. How can I share login information between nodes? I am unable to run a sticky session between nodes.
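Some background that may explain the 401: Splunk REST session keys are issued by the instance that handled the login and are not replicated between cluster members, so a key obtained from one node is generally rejected by the others. One common workaround is to log in to, and cache a key for, each node you call. A hedged Python sketch of that bookkeeping, with a fake login function standing in for a real POST to /services/auth/login (the hostnames are hypothetical):

```python
from typing import Callable, Dict


class PerNodeSession:
    """Cache one session key per cluster node, since keys are node-local."""

    def __init__(self, login: Callable[[str], str]):
        self._login = login          # function: node URL -> session key
        self._keys: Dict[str, str] = {}

    def auth_header(self, node: str) -> dict:
        if node not in self._keys:
            self._keys[node] = self._login(node)   # one login per node
        # Splunk REST expects "Authorization: Splunk <sessionKey>"
        return {"Authorization": f"Splunk {self._keys[node]}"}


# demo with a fake login; real code would POST credentials to the node
fake = PerNodeSession(lambda node: f"key-for-{node}")
h = fake.auth_header("https://node2:8089")
print(h)  # -> {'Authorization': 'Splunk key-for-https://node2:8089'}
```

An alternative worth checking is an authentication token mechanism that all nodes trust, which avoids per-node logins entirely.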
Hi guys, I have installed the machine agent on a Linux server and connected it to an on-premise controller (build 23.4.0-10019) via HTTP. The server appears in the controller dashboard correctly, with data in some tables such as properties, top 10 processes, etc. But metrics like load average, CPU, memory, availability, network, and disk usage are always empty. I checked machine-agent.log; there are ERRORs followed by an exception like the one below:

[system-thread-0] 26 Jul 2023 13:51:47,320 INFO SystemAgent - Using Agent Version [Machine Agent v23.1.0.3555 GA compatible with 4.4.1.0 Build Date 2023-01-24 11:02:32]
...
[AD Thread-Metric Reporter1] 26 Jul 2023 14:39:14,547 WARN ManagedMonitorDelegate - Problem registering metrics: Fatal transport error while connecting to URL [/controller/instance/1679/metricregistration]
[AD Thread-Metric Reporter1] 26 Jul 2023 14:39:14,547 WARN ManagedMonitorDelegate - Invalid metric registration response from Controller
[AD Thread-Metric Reporter1] 26 Jul 2023 14:39:14,547 WARN ManagedMonitorDelegate - Problem registering metrics with controller : java.lang.NullPointerException: null
[AD Thread-Metric Reporter1] 26 Jul 2023 14:39:14,547 ERROR ManagedMonitorDelegate - Error registering metrics
com.singularity.ee.agent.commonservices.metricgeneration.metrics.MetricRegistrationException: Error registering metrics with controller. Response:null
    at com.singularity.ee.agent.commonservices.metricgeneration.AMetricSubscriber.doRegisterMetrics(AMetricSubscriber.java:308) ~[shared-22.10.0.34287.jar:?]
    at com.singularity.ee.agent.commonservices.metricgeneration.AMetricSubscriber.registerMetrics(AMetricSubscriber.java:156) ~[shared-22.10.0.34287.jar:?]
    at com.singularity.ee.agent.commonservices.metricgeneration.MetricGenerationService.registerMetrics(MetricGenerationService.java:329) ~[shared-22.10.0.34287.jar:?]
    at com.singularity.ee.agent.commonservices.metricgeneration.MetricReporter.run(MetricReporter.java:101) [shared-22.10.0.34287.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.AgentScheduledExecutorServiceImpl$SafeRunnable.run(AgentScheduledExecutorServiceImpl.java:122) [agent-22.12.0-173.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_362]
    at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask$Sync.innerRunAndReset(ADFutureTask.java:335) [agent-22.12.0-173.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask.runAndReset(ADFutureTask.java:152) [agent-22.12.0-173.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.access$101(ADScheduledThreadPoolExecutor.java:119) [agent-22.12.0-173.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.runPeriodic(ADScheduledThreadPoolExecutor.java:206) [agent-22.12.0-173.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.run(ADScheduledThreadPoolExecutor.java:236) [agent-22.12.0-173.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.runTask(ADThreadPoolExecutor.java:694) [agent-22.12.0-173.jar:?]
    at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.run(ADThreadPoolExecutor.java:726) [agent-22.12.0-173.jar:?]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_362]
Caused by: java.lang.NullPointerException

Is this why some metrics are missing in the controller? What might be the reason that only the metricregistration URI times out while other data is sent to the controller correctly? Any clue is greatly appreciated.
I have the following search to track search usage, and I have a list of users I want to track in a CSV file. However, how do I add values that are in the CSV but not in my base search? In my base search some users have a count of 0. I'm getting the following:

username|count
userA|100
userB|200

I would like to add the missing usernames from the lookup to my results, like below:

username|count
userA|100
userB|200
userC|0
userD|0

| tstats `summariesonly` count from datamodel=Splunk_Audit.Search_Activity where (Search_Activity.info="granted" OR (Search_Activity.info="completed" Search_Activity.search_type="subsearch")) by Search_Activity.user
| rename Search_Activity.* as *
| sort + count
| lookup ess_analyst_list.csv username as user OUTPUT username as users
| where !isnull(users)
| fields - users
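What the question describes is effectively an outer join against the lookup: every username in the CSV should appear in the output, with a count of 0 when the base search returned nothing for it (in SPL this is commonly done with `inputlookup`, `append`, and `fillnull`). The underlying logic, sketched in Python with the sample values from the post:

```python
# counts produced by the base search
base_counts = {"userA": 100, "userB": 200}

# usernames held in the lookup CSV
lookup_users = ["userA", "userB", "userC", "userD"]

# every lookup user appears; users absent from the base search get count 0
# (the equivalent of appending the lookup rows and running fillnull in SPL)
result = {user: base_counts.get(user, 0) for user in lookup_users}
print(result)  # -> {'userA': 100, 'userB': 200, 'userC': 0, 'userD': 0}
```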
I have a Splunk instance on version 9.0.3, and it keeps throwing an error in Forwarder Ingestion Latency with the root cause "'Ingestion_latency_gap_multiplier' indicator exceeds configured value. Observed value is 2587595". Does anyone know how to solve this problem?
Hello, I have created an email alert where the recipients are pulled from the search results. Below is the relevant part of the search query:

| stats count by CreateTimestamp CCRecipients
| rename CCRecipients as Recipients
| table CreateTimestamp Recipients
| fields - count

In the email alert configuration I added $result.Recipients$ under the CC field. Without having Recipients in the search results, I cannot parse it into the CC field. So now, when the mail gets triggered, the values in Recipients get displayed as part of the table. How do I mask this field from appearing in the email?
Failed to check broker status
LRS.Http.HttpUnauthorizedException: Unauthorized
   at LRS.PersonalPrint.Service.Areas.User.Services.Gateway.HttpClientExceptionWithResponseDelegatingHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at LRS.Http.RefreshableAccessTokenDelegatingHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at Microsoft.Extensions.Http.Logging.LoggingScopeHttpMessageHandler.SendAsync(HttpRequestMessage , CancellationToken )
   at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage , HttpCompletionOption , CancellationTokenSource , Boolean , CancellationTokenSource , CancellationToken )
   at LRS.Http.JsonWebServiceClient.SendRequestAsync(HttpMethod method, String path, IEnumerable`1 queryParams, HttpContent content, CancellationToken cancellationToken, HttpCompletionOption completionOption)
   at LRS.Http.JsonWebServiceClientExtensions.GetObjectAsync[TObject](IJsonWebServiceClient client, String path, IEnumerable`1 queryParams, CancellationToken cancellationToken, IJsonObjectSerializer overrideSerializer)
   at LRS.PersonalPrint.Service.Areas.Sys.Services.Gateway.Broker.GatewayBrokerService.CheckBrokerStatusAsync(CancellationToken cancellationToken)
I have a search with "index=A", "Source=A", and "Source=B", and both sources have the field "Address". I want to find what is in Source A but not in Source B, based on the values of "Address".
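The comparison being asked for is a set difference on Address: values seen in Source A minus those seen in Source B. The logic, sketched in Python with made-up addresses:

```python
# Address values observed in each source (made-up examples)
source_a = {"10.0.0.1", "10.0.0.2", "10.0.0.3"}
source_b = {"10.0.0.2"}

# addresses present in Source A but missing from Source B
only_in_a = source_a - source_b
print(sorted(only_in_a))  # -> ['10.0.0.1', '10.0.0.3']
```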
I have a base search and I want to set a token based on the result count. When there are no search results, I want to show the message "Search returns no data" on the dashboard. I tried the approach below; the message doesn't show up when there are 0 search results. But if I set the token in the condition where 'job.resultCount' > 0 instead, the message shows up on the dashboard.

```
<search id="baseSearch">
  <done>
    <condition match="'job.resultCount' > 0">
      <unset token="check"></unset>
    </condition>
    <condition>
      <set token="check">Search returns no data</set>
    </condition>
  </done>
  <query> ... </query>
</search>
<row>
  <panel depends="$check$">
    <html>
      <h2>$check$</h2>
    </html>
  </panel>
</row>
```
index=abc sourcetype=app_logs
| stats count as events by host, host_ip
| where events > 0

When I schedule this as an alert, I receive an alert only when there is no data from all the hosts, but I need to get an alert when there is no data from any ONE host as well. How can I do this?
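The reason the alert only fires when every host is silent: a host that sent no events never appears in the results, so there is no row for `where events > 0` to act on. Detecting a single silent host requires comparing the hosts that did report against a known expected list (in Splunk, typically a lookup of all hosts). The comparison, sketched in Python with hypothetical host names:

```python
# hosts we expect to be sending data (in Splunk, usually a lookup table)
expected_hosts = {"web01", "web02", "db01"}

# hosts actually seen in the search window
reported_hosts = {"web01", "db01"}

# hosts that sent nothing -- these are the ones to alert on
silent_hosts = expected_hosts - reported_hosts
print(sorted(silent_hosts))  # -> ['web02']
```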
I want to create a table in Dashboard Studio that will open another dashboard when the user clicks a row in the table. However, I cannot figure out how to provide the name of the dashboard to link to in a hidden field (see code below). How do I create a table drilldown to a dashboard based on the name in a hidden column?

{
  "visualizations": {
    "viz_15E2DDQP": {
      "type": "splunk.table",
      "dataSources": { "primary": "ds_HsTRSmYx" },
      "title": "Fleet Status"
    }
  },
  "dataSources": {
    "ds_HsTRSmYx": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\r\n| eval ITEM = \"Operator Logins\"\r\n| eval STATUS = \"Good\"\r\n| eval _hot_link = \"operator_logins_dashboard\"\r\n| append [\r\n| makeresults\r\n| eval ITEM = \"Revenue Service\"\r\n| eval STATUS = \"Fair\"\r\n| eval _hot_link = \"revenue_service_dashboard\"\r\n]\r\n| append [\r\n| makeresults\r\n| eval ITEM = \"Announcements\"\r\n| eval STATUS = \"Poor\"\r\n| eval _hot_link = \"announcements_dashboard\"\r\n]\r\n| append [\r\n| makeresults\r\n| eval ITEM = \"Navigation\"\r\n| eval STATUS = \"Warning\"\r\n| eval _hot_link = \"navigation_dashboard\"\r\n]\r\n| append [\r\n| makeresults\r\n| eval ITEM = \"Available Resources\"\r\n| eval STATUS = \"Error\"\r\n| eval _hot_link = \"resources_dashboard\"\r\n]\r\n| table ITEM STATUS _hot_link"
      },
      "name": "Fleet Status"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-24h@h,now" },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": { "display": "auto-scale" },
    "structure": [
      {
        "item": "viz_15E2DDQP",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 690, "h": 260 }
      }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "description": "",
  "title": "Dashboard Studio Hotlink POC"
}
The Splunk ES documentation https://docs.splunk.com/Documentation/ES/7.1.1/Admin/Downloadthreatfeed#Add_a_URL-based_threat_source describes how to add a URL-based threat source, and it seems to work even with credentials using POST. What if I have to use an API key instead of credentials? How do I download threat intelligence from a remote API using API keys? From MCAP https://mcap.cisecurity.org/ for instance. Thank you for your time in advance.
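On the API-key part: when a built-in download framework only offers basic credentials, one common fallback is a small scripted or modular input that fetches the feed itself and writes it where it can be ingested. A minimal Python sketch of the request construction; note that the header name X-API-Key is only a common convention and an assumption here (check MCAP's API documentation for the real one), and the URL is a placeholder:

```python
import urllib.request


def build_feed_request(url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request authenticated with an API-key header.

    The header name is provider-specific; X-API-Key is a common
    convention, not something confirmed for MCAP.
    """
    return urllib.request.Request(url, headers={"X-API-Key": api_key})


req = build_feed_request("https://feeds.example.invalid/indicators", "my-secret-key")
# urllib normalizes stored header names via str.capitalize()
print(req.get_header("X-api-key"))  # -> my-secret-key
```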
[1pm PT / 4pm ET] - Register here and ask questions below. This thread is for the Community Office Hours session on Getting Data In (GDI) to Splunk Platform on Wed, August 9, 2023 at 1pm PT / 4pm ET.

Join our bi-weekly Office Hours series where technical Splunk experts answer questions and provide how-to guidance on a different topic every month! This is your opportunity to ask questions related to your specific GDI challenge or use case, including:

- How to onboard common data sources (AWS, Azure, Windows, *nix, etc.)
- Using forwarders
- Apps to get data in
- Data Manager (Splunk Cloud Platform)
- Ingest actions, archiving your data, and anything else you'd like to learn!

Please submit your questions below as comments in advance. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there's a quick answer available, we'll post as a direct reply.

Look forward to connecting!
My query (with the capture-group syntax corrected and the subsearch closed):

index=abd ("start app" AND "app listed")
| rex field=_raw "APP:\s+(?<application1>\S+)"
| rex field=_raw "LLA:\s+\[(?<dip>[^\]]+)\]"
| dedup dip
| chart count over application1
| appendcols [
    | search index=abd ("POST /ui/logs" OR "POST /ui/data" OR "POST /ui/vi/reg") AND "state: complete"
    | rex field=_raw "APP: (?<application2>\w+)"
    | rex field=_raw "LLA:\s+\[(?<dip>[^\]]+)\]"
    | dedup dip
    | chart count over application2 ]

I want output as shown below. How do I get this?

application1  count  application2  count
L1            10     L1            15
M2            20     M2            4
L3            45     L3            100
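As a sanity check on the capture groups (Splunk's rex uses PCRE-style `(?<name>...)`, while Python's re needs `(?P<name>...)`), the corrected patterns can be exercised against a made-up log line:

```python
import re

# hypothetical log line matching the shapes described in the question
line = "start app APP: L1 LLA: [10.1.2.3] app listed"

# Splunk rex: "APP:\s+(?<application1>\S+)" -- note the "(?" that the
# original query was missing before <application1>
app = re.search(r"APP:\s+(?P<application1>\S+)", line)

# Splunk rex: "LLA:\s+\[(?<dip>[^\]]+)\]" -- opening "(" and escaped brackets
dip = re.search(r"LLA:\s+\[(?P<dip>[^\]]+)\]", line)

print(app.group("application1"), dip.group("dip"))  # -> L1 10.1.2.3
```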
Hello Team,

We have a UBA 3-node architecture. Unfortunately, SAML authentication is required. We added the SAML XML file under "Manage --> Settings" as suggested. The result is that UBA threw us out of the platform with no chance to log in anymore either way. We have tried to log in with the standard UBA user as we have always done, as per https://docs.splunk.com/Documentation/UBA/5.2.0/Admin/UBALogin. Again, this page is misleading and there is no way to log in to Splunk UBA anymore. So we searched docs.splunk.com for suggestions. Unfortunately, all the Splunk documentation suggests using the GUI to revert -- which is not possible -- and now we are at a dead end. log.log under caspida is not revealing much:

2023-07-25 18:39:48.596 error: no permissions found for role(s): %s (user=%s), failing login
2023-07-25 18:39:48.596 error: No permissions found for the roles: undefined

The error page -- https://splunkuba.apps.mediaset.it/saml/acs -- shows:

{"userError":true,"message":"No permissions are granted to this username."}

but roles and users have been mapped properly. Does anyone know how to revert the authentication using the CLI? Does anyone know how to deploy SAML authentication? Thanks.
Hi All,

Issue: ITSI deployment failing in preproduction with the error "Unterminated string starting at: line 1 column 11371852 (char 11371851)".

Our ITSI deployment process is that we have a standalone box where we create our services, KPIs, etc., take a backup of that service, and restore the changes in the pre-prod environment. While doing so, the restoration fails with the error "Unterminated string starting at: line 1 column 11371852 (char 11371851)".

ITSI version: 4.13.2
Splunk version: 9.0.4

Steps tried:
1. Took a backup of the existing service (for example, service1) in QA, deleted service1, and restored service1 back to QA - received the same error.
2. Deleted the service and then restored the standalone backup - still failed with the same error.
3. Created a new test service and KPI on the standalone box and restored it in QA - still failed with the same error.

Requesting assistance on this issue. I have been trying multiple things and would appreciate any help. I will provide additional information if required.
This quarter, the Splunk Observability team is unveiling brand new capabilities to help you get ahead of your issues and provide high-quality digital customer experiences. Read on to learn more about how you can do more with your Splunk investment and start improving your investigation and monitoring practices today!

Better consolidation with Splunk Platform

Unified Identity Experience between Splunk Platform and Observability Cloud - GA: You can now seamlessly access Splunk Cloud and Splunk Observability data with one and the same user identity! As you're investigating an issue in Splunk Cloud Platform, you can maintain context and effortlessly navigate into Splunk Observability Cloud with our new single sign-on feature. We're also making it easy for Splunk admins to manage user data access by extending Splunk Cloud's role-based access control into Splunk Observability Cloud, so that meeting internal compliance requirements is no longer a headache. For more info, take a look at our technical documentation here. Note: this feature is currently only available for new Observability customers. A version for current Observability customers will be available soon.

The Splunk Distribution of the OpenTelemetry Collector as an Add-On for Splunk - Preview: If you're a Splunk customer who relies upon Splunk Deployment Server or other tools to manage your Technical Add-ons (TAs) and .conf files, getting started with and using Splunk Observability Cloud just got a whole lot easier! You can now deploy, update, and configure OpenTelemetry Collector agents in the same manner as any add-on. With this, you can quickly gain deep insight into the health, structure, and status of your technical infrastructure and services with Observability Cloud, and you can more easily manage your OpenTelemetry Collector agents at scale.
More visibility of your environment

Session Replay for RUM - Preview: With Session Replay, a new capability for Splunk RUM, users can gain visibility into end-user impact with a video reconstruction of every user interaction, correlate replay with the session waterfall view of granular user session data to quickly debug issues and reduce MTTR, and protect end-user PII with built-in text and image redaction options. Register here for the preview!

RUM and Synthetics Regional Expansion: Good news! We've expanded the regional availability of our DEM products. RUM is now available in Australia, while Synthetic Monitoring is accessible for our European users. Need a quick refresher on what these capabilities can do for you? Take a look at our technical documentation for RUM here and for Synthetics here.

Chrome Script Importer - Preview: Splunk Synthetic Monitoring's Chrome Script Importer captures precise user actions and complex user flows across multiple pages to replicate interactions and generate test scripts covering various scenarios and user journeys. By simplifying and automating browser test creation, engineering teams can ensure tests resemble actual user experiences and gain a more reliable understanding of functionality and performance. Register here for the preview!
Infrastructure Monitoring - Kubernetes Navigator Enhancements - GA: Splunk delivers enhanced visibility and accelerated troubleshooting for Kubernetes environments with the latest enhancements to the Kubernetes Navigator in Splunk Infrastructure Monitoring. While customers have always enjoyed the hierarchically structured, out-of-the-box Kubernetes monitoring solution with real-time metrics and cloud-native scale, the latest enhancements provide customers with one-step, zero-configuration installation, a detailed full-stack navigation experience, and additional visibility into control planes for all layers of Kubernetes systems and their hosted microservice workloads. For more info, take a look at our technical documentation here.

APM Service Centric Views - Preview: APM Service Centric Views give engineers a deep understanding of service performance in one centralized view. Now, across every service, engineers can quickly identify errors or bottlenecks from a service's underlying infrastructure, pinpoint performance degradations from new deployments, and visualize the health of every third-party dependency. Click here to sign up for the private preview, and watch this brief video overview.

More bang for your buck

ITSI - Outlier Detection - GA: Along with an improved UI, machine-learning-driven outlier detection lets you exclude historical outliers from calculations to improve threshold accuracy. You can now analyze and tune KPI thresholds side by side with a historical view from thresholding dashboards in the Content Pack for Monitoring & Alerting. For more information, take a look at our technical documentation here.

ITSI - ML-Adaptive Thresholding - Preview: ML-assisted Adaptive Thresholding delivers one-click configuration of adaptive thresholds to drive even faster time to value and accuracy.
Using state-of-the-art machine learning, underlying seasonality and patterns in the historic KPI data are identified to recommend optimized configurations for time policies and threshold levels with corresponding severities. Sign up for the preview here.

ITSI - Service Sandbox - Preview: Splunk ITSI Service Sandbox enables users to map services directly in the UI, reducing service decomposition time. In a pre-production environment, easily and quickly add, manage, and edit services, link service dependencies, and share with dependent teams prior to publishing to production. This sandbox environment allows teams to experiment and ensure services won't break before they're in production.

Zero Configuration for OpenTelemetry (continuous improvements): Splunk OpenTelemetry Zero Configuration Auto-Instrumentation automatically discovers and instruments your back-end applications to capture and report distributed traces to the Splunk Distribution of the OpenTelemetry Collector and then on to Splunk APM. This removes the need to deploy the language instrumentation agent and configure each service, and enables customers to start streaming their traces and monitoring their distributed applications with Splunk APM in minutes. Splunk OpenTelemetry Zero Configuration Auto-Instrumentation is available for Java applications running in Linux, Windows, and Kubernetes.

Splunk Observability Cloud - Service Level Objectives - Preview: With this new feature within Splunk Observability Cloud, available to users at no additional cost, customers can now track the reliability and performance of their services with Service Level Objectives (SLOs). This new feature gives customers the functionality to create and view SLOs throughout Splunk Observability out-of-the-box in an integrated experience, to easily measure reliability and user experience and align business needs to engineering reliability goals. Sign up for the preview here.

Try these capabilities today!
If you're already an Observability Cloud user, you can get started today by following the links to documentation we've provided. For Splunk Cloud or Enterprise users, start an Observability Cloud trial today!
Hi people, I want a query that prints out only the first n characters of the field value. So:

index=someIndex sourcetype=someNetworkDevice
| stats count by someField

The output looks like:

someField
this is a string value 1
this is a string value 1a
this is a string value 2
some other string value 1
some other string value 1a
some other string value 2
this is yet another string value 1
this is yet another string value 1a
etc.

I want to pull out, say, the first 10 characters in each row:

this is a
this is a
this is a
some other
some other
some other
this is ye
this is ye
etc.
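In SPL the usual tool for this is `substr` inside an eval (e.g. `| eval short=substr(someField, 1, 10)` before the stats); the string operation itself, shown in Python with values from the post:

```python
rows = [
    "this is a string value 1",
    "some other string value 1",
    "this is yet another string value 1",
]

# keep only the first 10 characters of each value
truncated = [value[:10] for value in rows]
print(truncated)  # -> ['this is a ', 'some other', 'this is ye']
```

Note the trailing space on the first result: a strict 10-character cut includes it, so a `trim` may be wanted afterwards.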
Hi, I need help extracting and filtering fields with rex and regex.

1) I need to use rex on a path field which ends with ".exe". Example: in the path C:\ProgramFiles\Toto\alert.exe I need to capture "alert.exe".

2) I need to filter events which have a path in AppData\Roaming and which end with .exe. I have tried this (with the missing opening quote added) but it doesn't work:

| regex NewProcess="(?i)\\\\AppData\\\\Roaming\\\\[^\\\\]+\\.exe$"

Thanks
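Both requirements can be checked outside Splunk with equivalent Python regexes (Splunk rex uses `(?<name>...)` where Python needs `(?P<name>...)`, and in SPL strings each backslash must additionally be doubled, which is where the `\\\\` runs come from). The first path is from the question; the Roaming path is made up:

```python
import re

# 1) capture the trailing executable name from a full path
path = r"C:\ProgramFiles\Toto\alert.exe"
m = re.search(r"(?P<exe>[^\\]+\.exe)$", path, re.IGNORECASE)
print(m.group("exe"))  # -> alert.exe

# 2) keep only paths under AppData\Roaming that end in .exe
roaming = re.compile(r"(?i)\\AppData\\Roaming\\[^\\]+\.exe$")
suspicious = r"C:\Users\bob\AppData\Roaming\payload.exe"
print(bool(roaming.search(suspicious)), bool(roaming.search(path)))  # -> True False
```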