All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello everyone, I am trying to SUM the columns.

    index="nzc-neel-uttar" source="http:kyhkp"
    | timechart span=1d count by Type
    | eval "New_Date"=strftime(_time,"%Y-%m-%d")

Current output:

    _time       Type-A  Type-B  New_Date
    20/07/2023  3       8       20/07/2023
    21/07/2023  4       23      21/07/2023
    22/07/2023  66      0       22/07/2023
    23/07/2023  90      0       23/07/2023
    24/07/2023  0       6       24/07/2023
    25/07/2023  0       23      25/07/2023

Desired output:

    New_Date    Type-A  Type-B  Total
    20/07/2023  3       8       11
    21/07/2023  4       23      27
    22/07/2023  66      0       66
    23/07/2023  90      0       90
    24/07/2023  0       6       6
    25/07/2023  0       23      23

Please suggest. Thanks
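A minimal sketch of one way to get that Total column, using addtotals to sum across the per-type columns (field names are taken from the sample output; the sample dates are DD/MM/YYYY, so the strftime format below is adjusted to match):

    index="nzc-neel-uttar" source="http:kyhkp"
    | timechart span=1d count by Type
    | addtotals fieldname=Total Type-*
    | eval New_Date=strftime(_time, "%d/%m/%Y")
    | table New_Date Type-A Type-B Total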
In Splunk SaaS Cloud, how would I get daily data ingest volume by indexes?
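One common approach, sketched here on the assumption that your role can search _internal on the Cloud stack (otherwise the Cloud Monitoring Console app exposes the same data): sum the license usage log, where idx is the index name and b the bytes ingested.

    index=_internal source=*license_usage.log type="Usage"
    | eval GB=b/1024/1024/1024
    | timechart span=1d sum(GB) as GB by idx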
Hi There, I am currently trying to set up specific events to be sent to a separate index. The documentation on how to do this was quite confusing for me, so I assume I am making a very obvious mistake. I can provide any necessary information. Any help would be appreciated, Jamie
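For what it's worth, the usual pattern for routing a subset of events to a different index is a props.conf/transforms.conf pair on the first heavy forwarder or indexer the data passes through. A sketch, where the sourcetype, regex, and index name are all placeholders:

    # props.conf
    [my_sourcetype]
    TRANSFORMS-route_events = route_to_other_index

    # transforms.conf
    [route_to_other_index]
    REGEX = pattern_identifying_the_events
    DEST_KEY = _MetaData:Index
    FORMAT = my_other_index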
I'm trying to create something that displays long term outages: any index that hasn't had traffic in the last hour. I've made heartbeat alerts that notify when outages occur, but they're limited to an hour to save resources. After that hour, they drop off the face of the earth and aren't accounted for - this is okay for alerts, but not for a dashboard, where persistence is the goal. I'm trying to create a search that returns the names of any index that's had 0 logs in the last hour. I have this so far:

    | tstats count where [| inputlookup monitoringSources.csv | fields index | format] earliest=-1h latest=now() by index
    | where count=0

However, I know this doesn't work, as I have a dummy index name in that .csv file that doesn't exist. If I'm not mistaken, it should be returning the dummy index with a count of 0 (it does not). How could I do this without inflating the search time range past an hour?
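tstats only returns rows for indexes that actually have events, so the zero-count rows have to be added in explicitly. A sketch (lookup and field names as in the post): append every index from the lookup with a count of 0, then take the max per index so real counts win.

    | tstats count where [| inputlookup monitoringSources.csv | fields index | format] earliest=-1h latest=now() by index
    | append [| inputlookup monitoringSources.csv | fields index | eval count=0]
    | stats max(count) as count by index
    | where count=0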
Please share if anyone has an idea about the severity (Warn/Critical) and violation status variables available during HTTP REST API integration.
Hi Splunkers, I need to show some stakeholders the correlation searches that we have enabled and that are aligned to the MITRE ATT&CK framework. I've tried using the REST command and I can find all the annotations under the "action.correlationsearch.annotations" field, but I would like to narrow it down to only MITRE ATT&CK. Does anyone know how to build this search?
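A sketch using rest plus spath to pull out just the MITRE ATT&CK annotations; it assumes the annotations field holds JSON with a mitre_attack array, as written by the Enterprise Security correlation search editor:

    | rest /servicesNS/-/-/saved/searches
    | search action.correlationsearch.enabled=1 disabled=0
    | spath input=action.correlationsearch.annotations path=mitre_attack{} output=mitre_attack
    | where isnotnull(mitre_attack)
    | table title mitre_attack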
Hi, has anyone managed to replicate the "depends" functionality for showing/hiding a panel from classic Simple XML in the new Splunk Dashboard Studio? My goal is to click on a single value visualization in Dashboard Studio and set a token with the click, which will then make another panel appear below it. This should change if I click on another single value, changing the token value and displaying a different visualization below. This is the functionality I'm speaking about: the click on the single value should set the token $tokenfordisplay$, and the panel that should appear and disappear would be the equivalent of <panel depends="$tokenfordisplay$">. Thanks for any help
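The click half of this can be sketched in Dashboard Studio's source JSON with a drilldown.setToken event handler on the single value (the key below is a placeholder; whether a panel can then be shown or hidden based on that token depends on your Splunk version, since token-driven visibility arrived in Studio much later than in Simple XML):

    "eventHandlers": [
        {
            "type": "drilldown.setToken",
            "options": {
                "tokens": [
                    {"token": "tokenfordisplay", "key": "name"}
                ]
            }
        }
    ]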
Hi, against my corporate account I want to enable a webhook action to get all responses for a query into my Java API, which I want to consume further. I want to know if a Splunk Enterprise webhook is the correct approach for this. Also, if I configure my API URL in a Splunk webhook alert, will I immediately start getting the payload from Splunk, or will I need to add the URL to an allow list? And will I get the complete payload for the query in my response, or just certain fields?
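For reference, a sketch of the shape of the JSON a Splunk webhook alert POSTs (values are illustrative). Note that "result" carries only the fields of the first result row, not the full result set, so to consume all rows you would typically fetch the job via the results_link or the REST API instead:

    {
      "sid": "scheduler__admin__search__RMD5...",
      "search_name": "my alert",
      "app": "search",
      "owner": "admin",
      "results_link": "https://splunk.example.com/app/search/@go?sid=...",
      "result": { "field1": "value1", "field2": "value2" }
    }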
I have taken over a project from 2 colleagues to install and integrate VectraAI and Splunk. We have a Vectra X29 as Brain/Sensor running Cognito Detect 7.0.2. I have got the Vectra part up and running but have problems with getting data to Splunk. A Splunk representative recommended I use SC4S instead of sending the syslog data directly to Splunk, which runs on a Windows Server 2019 platform (cannot install syslog-ng). SC4S runs on a CentOS Stream 8 server in a Podman container.

Now, for the Vectra-specific part:
1) Should I use Cognito Stream to send syslog to SC4S, and if yes, in syslog or JSON format (some documentation recommends this with the Universal Forwarder for Splunk)? JSON doesn't seem to work as it is now. I have configured HEC forwarding from SC4S to Splunk as recommended by the documentation.
2) Should I use Notifications => Syslog to send syslog to SC4S, and if yes, in syslog or JSON format?
3) Can I send directly to Splunk's Vectra Stream App?

Both 1 and 2 seem to work for SC4S, but then I bump into problems, and I'm not sure what the problem is. HEC forwarding from SC4S to Splunk comes up as it should with the correct setup, and it forwards Vectra data (nothing else is collected by SC4S) to Splunk - or maybe it doesn't, since I see drop events in Splunk.

I have configured a filter for Vectra in /opt/sc4s/env_file:

    SC4S_LISTEN_VECTRA_NETWORKS_X_SERIES_TCP_PORT=9101

which should identify the data as Vectra-originated, but I'm not sure SC4S handles it correctly. I lack documentation on how to troubleshoot indexed data in SC4S, plus how to correctly configure /opt/sc4s/env_file and any other files needed. I have configured all indexes according to the SC4S documentation.

In Splunk I can see incoming events with action=drop:

    26/07/2023 - - syslog-ng 155 - [meta sequenceId="16928"] http: handled by response_action; action='drop', url='htps://x.x.x.x:8088/services/collector/event', status_code='400', driver='d_hec_fmxt#0', location='root generator dest_hec:5:5'
    12:19:03:144    Host = abcdlog2 | source = sc4s | sourcetype = sc4s:events
    26/07/2023 - - syslog-ng 155 - [meta sequenceId="16929"] Message(s) dropped while sending message to destination; driver='d_hec_fmt#0', worker_index='7', time_reopen='10', batch_size='1'
    12:19:03:144    Host = abcdlog2 | source = sc4s | sourcetype = sc4s:events

Any advice would be appreciated.

Timo Krjukoff
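As a sanity check: an HEC status_code='400' usually means Splunk rejected the event itself, most often because the target index doesn't exist on the Splunk side or isn't allowed for that HEC token. The relevant env_file settings look roughly like this (URL and token are placeholders; the Vectra port variable is the one from the post):

    SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your-splunk-host:8088
    SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<your-hec-token>
    SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
    SC4S_LISTEN_VECTRA_NETWORKS_X_SERIES_TCP_PORT=9101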
Hi Team, is it possible to update a Dashboard Studio dashboard in a Splunk Cloud instance in real time, as and when events arrive in Splunk from an S3 bucket (I am using SQS-based S3 inputs for the Splunk Add-on for AWS), without refreshing the dashboard? I would like to display an image, like an on/off indicator, in the dashboard based on the events coming in in real time, without refreshing it every 5 or 10 seconds. Thanks
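Dashboard Studio doesn't push events in true real time, but the closest built-in option is an auto-refreshing data source configured in the dashboard's source JSON. A sketch (data source name, query, and interval are illustrative):

    "dataSources": {
        "ds_status": {
            "type": "ds.search",
            "options": {
                "query": "index=my_aws_index | stats latest(status) as status",
                "refresh": "10s",
                "refreshType": "delay"
            }
        }
    }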
What's the fastest way to import into the KV Store? I have about 650,000 rows and the import is slow via the "Lookup File Editing" app - it takes approximately a few hours. Is there a faster way to import into the KV Store? Or should I create a new dummy index and import into that instead (would that be faster)?
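One sketch that is usually much faster than going through the UI: upload the CSV as a plain file-based lookup, then copy it into the collection with outputlookup (this assumes a KV Store lookup definition named my_kvstore_lookup already points at the collection). For very large volumes there is also the REST batch_save endpoint under storage/collections/data, which inserts documents in bulk.

    | inputlookup my_big_file.csv
    | outputlookup my_kvstore_lookup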
In the graph below I see values displayed on top of each bar. How do I remove them?
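If this is a classic Simple XML dashboard, the usual switch is the data-labels charting option shown below (in Dashboard Studio the equivalent is the "Data values" setting in the chart's configuration panel):

    <option name="charting.chart.showDataLabels">none</option>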
Hello, we've been advised to "Disable the ability to process ANSI escape codes in terminal applications" and I honestly don't know what that really means, and I can't find much guidance around it. It wouldn't be something we actively use, but I just don't know if it's enabled by default or not. Can someone help me figure out how to ensure this is disabled? We have a simple standalone environment with data just coming in from universal forwarders reading local logs. Thanks in advance, Felix
Hi, I have created a cluster of 3 nodes and I use the Splunk REST API to perform a login. I regularly get a session key after the login, but if the next call goes to a node other than the one that logged me in, I get a 401 even though I pass the session key in the call. How can I share login information between nodes? I am unable to run sticky sessions between the nodes.
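Session keys are only valid on the node that issued them. One workaround to sketch is a bearer authentication token instead, created once and then sent to any node (hostnames and credentials below are placeholders; token replication across members is a search head cluster feature, so treat its applicability to your topology as an assumption to verify):

    curl -k -u admin:changeme -X POST https://node1:8089/services/authorization/tokens \
         -d name=admin -d audience=rest_api -d expires_on=+30d

    # Then, against any node:
    curl -k -H "Authorization: Bearer <token>" https://node2:8089/services/search/jobs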
Hi Guys, I have installed the machine agent on a Linux server and connected the agent to an on-premise controller (build 23.4.0-10019) via HTTP. The server appears in the controller dashboard correctly, with data in some tables, such as properties, top 10 processes, etc. But metrics like load average, CPU, memory, availability, network, and disk usage are always empty. I checked machine-agent.log; there are ERRORs followed by an exception like the one below:

    [system-thread-0] 26 Jul 2023 13:51:47,320 INFO SystemAgent - Using Agent Version [Machine Agent v23.1.0.3555 GA compatible with 4.4.1.0 Build Date 2023-01-24 11:02:32]
    ...
    [AD Thread-Metric Reporter1] 26 Jul 2023 14:39:14,547 WARN ManagedMonitorDelegate - Problem registering metrics: Fatal transport error while connecting to URL [/controller/instance/1679/metricregistration]
    [AD Thread-Metric Reporter1] 26 Jul 2023 14:39:14,547 WARN ManagedMonitorDelegate - Invalid metric registration response from Controller
    [AD Thread-Metric Reporter1] 26 Jul 2023 14:39:14,547 WARN ManagedMonitorDelegate - Problem registering metrics with controller : java.lang.NullPointerException: null
    [AD Thread-Metric Reporter1] 26 Jul 2023 14:39:14,547 ERROR ManagedMonitorDelegate - Error registering metrics
    com.singularity.ee.agent.commonservices.metricgeneration.metrics.MetricRegistrationException: Error registering metrics with controller. Response:null
        at com.singularity.ee.agent.commonservices.metricgeneration.AMetricSubscriber.doRegisterMetrics(AMetricSubscriber.java:308) ~[shared-22.10.0.34287.jar:?]
        at com.singularity.ee.agent.commonservices.metricgeneration.AMetricSubscriber.registerMetrics(AMetricSubscriber.java:156) ~[shared-22.10.0.34287.jar:?]
        at com.singularity.ee.agent.commonservices.metricgeneration.MetricGenerationService.registerMetrics(MetricGenerationService.java:329) ~[shared-22.10.0.34287.jar:?]
        at com.singularity.ee.agent.commonservices.metricgeneration.MetricReporter.run(MetricReporter.java:101) [shared-22.10.0.34287.jar:?]
        at com.singularity.ee.util.javaspecific.scheduler.AgentScheduledExecutorServiceImpl$SafeRunnable.run(AgentScheduledExecutorServiceImpl.java:122) [agent-22.12.0-173.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_362]
        at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask$Sync.innerRunAndReset(ADFutureTask.java:335) [agent-22.12.0-173.jar:?]
        at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask.runAndReset(ADFutureTask.java:152) [agent-22.12.0-173.jar:?]
        at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.access$101(ADScheduledThreadPoolExecutor.java:119) [agent-22.12.0-173.jar:?]
        at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.runPeriodic(ADScheduledThreadPoolExecutor.java:206) [agent-22.12.0-173.jar:?]
        at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.run(ADScheduledThreadPoolExecutor.java:236) [agent-22.12.0-173.jar:?]
        at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.runTask(ADThreadPoolExecutor.java:694) [agent-22.12.0-173.jar:?]
        at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.run(ADThreadPoolExecutor.java:726) [agent-22.12.0-173.jar:?]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_362]
    Caused by: java.lang.NullPointerException

Is this why some metrics are missing in the controller?
What might be the reason that only the metric registration URI times out while other data is sent to the controller correctly? Any clue is greatly appreciated.
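One thing worth checking, offered as an assumption since the symptom matches: the machine-level CPU/memory/disk metrics require Server Visibility to be enabled (and licensed), which is controlled in the agent's controller-info.xml. A sketch with placeholder values:

    <controller-host>mycontroller.example.com</controller-host>
    <controller-port>8090</controller-port>
    <sim-enabled>true</sim-enabled>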
I have the following search to track search usage, and I have a list of users I want to track in a CSV file. However, how do I add values that are in the CSV but not in my base search? In my base search some users have a count of 0. I'm getting the following:

    username | count
    userA    | 100
    userB    | 200

I would like to add the missing usernames from the lookup to my results, like below:

    username | count
    userA    | 100
    userB    | 200
    userC    | 0
    userD    | 0

    | tstats `summariesonly` count from datamodel=Splunk_Audit.Search_Activity where (Search_Activity.info="granted" OR (Search_Activity.info="completed" Search_Activity.search_type="subsearch")) by Search_Activity.user
    | rename Search_Activity.* as *
    | sort + count
    | lookup ess_analyst_list.csv username as user OUTPUT username as users
    | where !isnull(users)
    | fields - users
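A sketch using the usual append-and-max pattern (field names taken from the post; it assumes the lookup's username column matches the audit user values): keep only users in the lookup, append every lookup user with a count of 0, then take the max per user so real counts win.

    | tstats `summariesonly` count from datamodel=Splunk_Audit.Search_Activity where (Search_Activity.info="granted" OR (Search_Activity.info="completed" Search_Activity.search_type="subsearch")) by Search_Activity.user
    | rename Search_Activity.user as username
    | lookup ess_analyst_list.csv username OUTPUT username as in_list
    | where isnotnull(in_list)
    | fields - in_list
    | append [| inputlookup ess_analyst_list.csv | fields username | eval count=0]
    | stats max(count) as count by username
    | sort + count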
I have a Splunk instance on version 9.0.3 and Splunk keeps throwing an error in Forwarder Ingestion Latency with root cause "'ingestion_latency_gap_multiplier' indicator exceeds configured value. Observed value is 2587595". Does anyone know how to solve this problem?
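If the latency itself turns out to be benign (for example, a forwarder replaying old data with old timestamps), the thresholds for this health indicator can be tuned in health.conf. A sketch, offered as an assumption to verify against the health.conf spec for your version; the threshold values below are illustrative, not defaults:

    [feature:ingestion_latency]
    indicator:ingestion_latency_gap_multiplier:yellow = 10
    indicator:ingestion_latency_gap_multiplier:red = 60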
Hello, I have created an email alert and the recipients are pulled from the search results. Below is the relevant part of the search query:

    | stats count by CreateTimestamp CCRecipients
    | rename CCRecipients as Recipients
    | table CreateTimestamp Recipients
    | fields - count

In the email alert configuration I added $result.Recipients$ under the CC field. Without having Recipients in the search results, I cannot parse it into the CC field. So now, when the mail gets triggered, the values in Recipients are displayed as part of the table; how do I mask this from appearing in the email?
Failed to check broker status

    LRS.Http.HttpUnauthorizedException: Unauthorized
        at LRS.PersonalPrint.Service.Areas.User.Services.Gateway.HttpClientExceptionWithResponseDelegatingHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        at LRS.Http.RefreshableAccessTokenDelegatingHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        at Microsoft.Extensions.Http.Logging.LoggingScopeHttpMessageHandler.SendAsync(HttpRequestMessage, CancellationToken)
        at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage, HttpCompletionOption, CancellationTokenSource, Boolean, CancellationTokenSource, CancellationToken)
        at LRS.Http.JsonWebServiceClient.SendRequestAsync(HttpMethod method, String path, IEnumerable`1 queryParams, HttpContent content, CancellationToken cancellationToken, HttpCompletionOption completionOption)
        at LRS.Http.JsonWebServiceClientExtensions.GetObjectAsync[TObject](IJsonWebServiceClient client, String path, IEnumerable`1 queryParams, CancellationToken cancellationToken, IJsonObjectSerializer overrideSerializer)
        at LRS.PersonalPrint.Service.Areas.Sys.Services.Gateway.Broker.GatewayBrokerService.CheckBrokerStatusAsync(CancellationToken cancellationToken)