All Posts


1. The Splunk process is running on the server.
2. The correct inputs are configured under inputs.conf and outputs.conf:

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 300
index = wineventlog
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 300
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist3 = EventCode="5447"
index = wineventlog
renderXml = false
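A quick way to sanity-check the negative-lookahead blacklists above is to run the same pattern against sample Message text. A sketch using Python's `re` engine (the sample strings are invented for illustration; note the single space after the colon, since with multiple spaces `\s+` backtracking can let the lookahead slip past):

```python
import re

# blacklist1's Message clause from the stanza above: a matching event is
# blacklisted when "Object Type:" is followed by anything OTHER than
# groupPolicyContainer (negative lookahead).
pattern = re.compile(r"Object Type:\s+(?!groupPolicyContainer)")

kept = "Object Type: groupPolicyContainer"  # lookahead fails -> no match -> event is indexed
dropped = "Object Type: user"               # lookahead passes -> match -> event is blacklisted

print(pattern.search(kept))    # None: this event survives the blacklist
print(pattern.search(dropped))
```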
Hi All, I have a panel in a classic dashboard with a pie chart visualisation. Below is the query:

index="*test" sourcetype=aws:test host=testhost
| table lvl msg _time source host tnt
| search lvl IN (Error, Warn) source="*testsource*"
| chart count BY lvl
| sort -count

When I run the query, it shows lvl counts of warn=304 and error=5, but the pie chart shows different counts: warn=325, error=7. I can't work out what is causing this. Can anyone help me understand it? I'd really appreciate it. Thanks, PNV
Hi @SN1

Did you see the responses to your previous post about this on Friday? https://community.splunk.com/t5/Splunk-Search/Error/m-p/712793 Did any of those solutions work for you?

It sounds like your license has expired or was not installed correctly. Go to https://yourSplunkInstance/en-US/manager/system/licensing and check that the license is showing as valid. Are you expecting your instance to be connected to a License Server, or does it have its own license installed? Can you reach the license server (if applicable) from the problematic server using netcat? (nc -vz <serverIP> 8089)

Once you have confirmed, let us know and we can look at further investigation steps. If your license isn't showing here, or if the server cannot connect to your license server (if applicable), then you will need to resolve this before you can search non-internal indexes.
Hi @tt-nexteng

How are you loading your inputs.conf into the Docker image? Are you adding it directly into the container once it has started up? Splunk Ansible runs each time the container starts, so the container is effectively idempotent and will re-apply the configuration defined in default.yml / docker-compose ENV variables on every start.

Check out https://splunk.github.io/docker-splunk/ADVANCED.html for configuration options to persist the inputs.conf - specifically the section on enabling SSL internal communication, as this includes the config for inputs on port 9997 too.

Sample default.yml snippet to configure Splunk TCP with SSL:

splunk:
  ...
  s2s:
    ca: /mnt/certs/ca.pem
    cert: /mnt/certs/cert.pem
    enable: true
    password: abcd1234
    port: 9997
    ssl: true
  ...

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
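To confirm after a restart that the container actually rendered the stanza you expect, you can parse what is on disk. Splunk conf files are close enough to INI that Python's configparser can read them for a quick offline check; a sketch (the stanza contents mirror the question's config, and the in-memory string stands in for reading /opt/splunk/etc/system/local/inputs.conf inside the container):

```python
import configparser

# Stand-in for the rendered inputs.conf you would read from the container.
rendered = """
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
requireClientCert = false
"""

parser = configparser.ConfigParser()
parser.read_string(rendered)

# Verify the receiving port survived the restart.
# Note: configparser lowercases option names, hence "requireclientcert".
assert parser.has_section("splunktcp-ssl:9997")
assert parser["splunktcp-ssl:9997"]["disabled"] == "0"
assert parser["SSL"]["requireclientcert"] == "false"
print("splunktcp-ssl:9997 stanza present")
```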
Hi @santoshboorlaga

The errors AccessViolationException and ExecutionEngineException suggest problems with the native profiler component.

Check Environment Variables - Update your launchSettings.json to ensure all paths and versions match. Ensure the NuGet packages are installed with matching versions.

Modify Your Controller for Better Tracing:

using Microsoft.AspNetCore.Mvc;
using System.Diagnostics;
using System.Net.Http;
using System.Diagnostics.Metrics;
using System.Collections.Generic;

namespace WebApplication3.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SampleController : ControllerBase
    {
        private readonly IHttpClientFactory _httpClientFactory;
        private static readonly ActivitySource _activitySource =
            new ActivitySource("WebApplication3.Controllers");

        public SampleController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        [HttpGet("execute")]
        public async Task<IActionResult> Execute()
        {
            // Create a parent activity for the entire request
            using var requestActivity = _activitySource.StartActivity("Execute", ActivityKind.Server);
            requestActivity?.SetTag("http.method", "GET");
            requestActivity?.SetTag("http.path", "/api/sample/execute");

            await DoSomething();

            // Get the current trace ID
            var traceId = Activity.Current?.TraceId.ToString() ?? requestActivity?.TraceId.ToString();

            requestActivity?.SetTag("request.success", true);
            return Ok(new { message = "Request executed", traceId });
        }

        [NonAction]
        public async Task DoSomething()
        {
            // This method should be instrumented by auto-instrumentation,
            // but we can also add explicit instrumentation for redundancy
            using var activity = _activitySource.StartActivity(
                "DoSomething",
                ActivityKind.Internal,
                parentContext: Activity.Current?.Context ?? default);

            activity?.SetTag("operation.type", "data_fetch");

            try
            {
                // Internal HTTP call to another endpoint
                var client = _httpClientFactory.CreateClient();
                var response = await client.GetAsync("https://jsonplaceholder.typicode.com/todos/1");
                response.EnsureSuccessStatusCode();
                var content = await response.Content.ReadAsStringAsync();

                activity?.SetTag("operation.success", true);
                Console.WriteLine($"TraceId: {activity?.TraceId}");
                Console.WriteLine(content);
            }
            catch (Exception ex)
            {
                activity?.SetTag("operation.success", false);
                activity?.SetTag("error.message", ex.Message);
                activity?.SetStatus(ActivityStatusCode.Error);
                throw;
            }
        }
    }
}

Add Program.cs Configuration - You need to manually add OpenTelemetry services to your Program.cs file for better control and to ensure proper integration.

Regarding your CORECLR_ENABLE_PROFILING issue: you mentioned that when you set "CORECLR_ENABLE_PROFILING": "0", you see trace IDs in the console but not in Splunk APM. This makes sense because:
- Setting this to 0 disables the profiler-based auto-instrumentation
- Your explicit instrumentation still works (via Activity objects)
- But without the profiler, the method-level tracing won't work

The solution is to keep CORECLR_ENABLE_PROFILING set to 1 but fix the underlying issues:
- Make sure all paths in the environment variables are correct
- Check the OpenTelemetry configuration as shown above

For production environments, using an OpenTelemetry Collector is recommended:
- It acts as a buffer between your application and Splunk
- Provides retry mechanisms for temporary connection issues
- Can batch telemetry data for more efficient transmission
- Reduces the overhead on your application

For development you can continue using direct export to Splunk as you're doing now, but for production I'd recommend setting up a collector.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
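One conceptual point worth seeing in isolation: Activity.Current is ambient state that flows with async execution, which is why explicit StartActivity calls can find their parent without it being passed around. A language-neutral sketch of that mechanism using plain Python contextvars (no OpenTelemetry dependency; the Span class and its fields are invented for illustration):

```python
import contextvars
import uuid

# Rough analogue of .NET's Activity.Current: an ambient, async-safe slot
# holding the currently active span.
current_span = contextvars.ContextVar("current_span", default=None)

class Span:
    def __init__(self, name):
        self.name = name
        parent = current_span.get()
        # A child inherits the parent's trace id; a root span mints a new one.
        self.trace_id = parent.trace_id if parent else uuid.uuid4().hex
        self._token = None

    def __enter__(self):
        self._token = current_span.set(self)  # become the ambient span
        return self

    def __exit__(self, *exc):
        current_span.reset(self._token)       # restore the previous span

with Span("Execute") as root:
    with Span("DoSomething") as child:
        # Same trace id: the child found its parent through the ambient slot,
        # much like StartActivity picks up Activity.Current?.Context.
        print(child.trace_id == root.trace_id)  # True
```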
There are so many macros involved that we can't say much about it directly. The only advice I can give is to work through it step by step and find out why latest_time doesn't get a value defined. This app https://classic.splunkbase.splunk.com/app/1603/ can help you identify which values are defined in your code. Just add script=… to your dashboard and it shows the values to you. See e.g. https://data-findings.com/wp-content/uploads/2024/09/HSUG-20240903-Tiia-Ojares.pdf page 4.
@wj742

1. Confirm the UF is running
First, ensure the UF is actually running on the Windows server:
- Open the Services panel (services.msc) and look for "SplunkForwarder".
- Confirm it's running. If it's stopped, start it. If it's running, restart it to rule out a temporary glitch (right-click > Restart).

2. Check UF logs for errors
The UF logs can reveal why data isn't being forwarded. On the Windows server:
- Navigate to C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log
- Open splunkd.log in a text editor and look for ERROR or WARN messages, especially around the time you restarted the service or when forwarding should have occurred.
- Key phrases: TcpOutputProc (indicates connection issues to the indexer) or FileInputTracker (indicates issues reading monitored files).
- Common issues to spot: "Connect to <indexer_IP>:<port> failed" suggests a network or indexer problem; "Paused the data flow" indicates a forwarding block, often due to indexer issues.

3. Validate the forwarding configuration
Even if you think the configuration is fine, double-check the UF's outputs.conf:
- Location: C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf (or in an app directory like etc\apps\<app_name>\local\ if managed by a deployment server).
- Example of a correct configuration:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <indexer_IP>:9997

Things to verify:
- The server line points to the correct indexer IP and port (typically 9997), with no typos.
- disabled = false (or omitted, as false is the default).
- If changes are made, restart the UF: C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe restart

4. Confirm network connectivity
Since you've said the ports are open, test connectivity explicitly. From the Windows server, open a Command Prompt and run:

telnet <indexer_IP> 9997

(replace with your indexer's IP and port). If it connects (blank screen), the connection is good. If it fails ("Connect failed"), there's a network issue despite the open ports.

Alternative: use PowerShell:

Test-NetConnection -ComputerName <indexer_IP> -Port 9997

If it fails:
- Double-check the firewall on the Windows server (outbound TCP 9997).
- Check the indexer's firewall (inbound TCP 9997).
- Confirm with your network team that no intermediate devices (e.g., proxies, NATs) are blocking traffic.

5. Verify the indexer's receiving configuration
The indexer must be configured to receive data. Ensure it's set to listen on the expected port (e.g., 9997).

6. Validate the inputs configuration
The UF needs to know what data to forward. Check inputs.conf:
- Location: C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf (or an app directory).
- Example for Windows Event Logs:

[WinEventLog://Application]
disabled = false
index = my_index
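On newer Windows builds the telnet client is often not installed; the same TCP reachability check can be scripted with nothing but the standard library. A minimal sketch (the host and port in the commented example are placeholders):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: replace with your indexer's IP and receiving port.
# print(can_reach("10.0.0.5", 9997))
```

Like telnet or Test-NetConnection, this only proves TCP reachability; it says nothing about whether the indexer accepts Splunk traffic on that port.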
@wj742

Check Splunk Process: Ensure the Splunk Universal Forwarder (UF) process is running.

Verify Configuration Files: Ensure inputs.conf and outputs.conf are correctly configured. Here are sample configurations:

# inputs.conf
[monitor://C:\path\to\logs]
disabled = false
sourcetype = my_sourcetype

# outputs.conf
[tcpout:my_indexer]
server = indexer_hostname:9997

Ensure the Splunk user has the necessary permissions to read the log files being monitored.
@wj742  Check the splunkd.log for more detailed information.  
We have a UF installed on one of the Windows servers. All the configurations seem fine and the ports are open, but the server is still not forwarding data to Splunk.
@SN1 Check this https://community.splunk.com/t5/Splunk-Search/Receiving-quot-DISABLED-DUE-TO-GRACE-PERIOD-quot/td-p/82802 
We are trying to implement method-level tracing using the `splunk.opentelemetry.autoinstrumentation` package (version 1.9.0) in a .NET Core Web API application targeting .NET 9. We’ve set up the code, but we’re facing runtime issues. Despite trying various solutions, including reinstalling the `splunk.opentelemetry.autoinstrumentation` package, the issues persist. Could you please help us resolve these and suggest any necessary modifications? Do we need to add a collector?

Another issue: when we set "CORECLR_ENABLE_PROFILING": "0", we are able to see trace IDs in the console but unable to see them in the Splunk APM window.

Error 1:

System.ExecutionEngineException
  HResult=0x80131506
  Source=<Cannot evaluate the exception source>
  StackTrace: <Cannot evaluate the exception stack trace>
   at OpenTelemetry.AutoInstrumentation.NativeMethods+Windows.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[], Int32)
   at OpenTelemetry.AutoInstrumentation.NativeMethods.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[])
   at OpenTelemetry.AutoInstrumentation.Instrumentation.RegisterBytecodeInstrumentations(Payload)
   at OpenTelemetry.AutoInstrumentation.Instrumentation.Initialize()
   at DynamicClass.InvokeStub_Instrumentation.Initialize(System.Object, System.Object, IntPtr*)
   at System.Reflection.MethodBaseInvoker.InvokeWithNoArgs(System.Object, System.Reflection.BindingFlags)
   at System.Reflection.RuntimeMethodInfo.Invoke(System.Object, System.Reflection.BindingFlags, System.Reflection.Binder, System.Object[], System.Globalization.CultureInfo)
   at System.Reflection.MethodBase.Invoke(System.Object, System.Object[])
   at OpenTelemetry.AutoInstrumentation.Loader.Loader.TryLoadManagedAssembly()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader..cctor()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader..ctor()
   at System.RuntimeType.CreateInstanceDefaultCtor(Boolean, Boolean)
   at System.RuntimeType.CreateInstanceImpl(System.Reflection.BindingFlags, System.Reflection.Binder, System.Object[], System.Globalization.CultureInfo)
   at System.Reflection.Assembly.CreateInstance(System.String)
   at StartupHook.Initialize()
   at System.StartupHookProvider.ProcessStartupHooks(System.String)

Error 2:

Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
   at OpenTelemetry.AutoInstrumentation.NativeMethods+Windows.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[], Int32)
   at OpenTelemetry.AutoInstrumentation.Instrumentation.RegisterBytecodeInstrumentations(Payload)
   at OpenTelemetry.AutoInstrumentation.Instrumentation.Initialize()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader.TryLoadManagedAssembly()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader..cctor()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader..ctor()
   at System.RuntimeType.CreateInstanceDefaultCtor(Boolean, Boolean)
   at System.Reflection.Assembly.CreateInstance(System.String)
   at StartupHook.Initialize()
   at System.StartupHookProvider.ProcessStartupHooks()

launchSettings.json:

{
  "$schema": "https://json.schemastore.org/launchsettings.json",
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:47665",
      "sslPort": 44339
    }
  },
  "profiles": {
    "WebApplication3": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:7146;http://localhost:5293",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "OTEL_SERVICE_NAME": "MyDotNet6WebApi",
        "OTEL_EXPORTER_OTLP_ENDPOINT": "https://ingest.XX.signalfx.com/v2/trace/otlp",
        "OTEL_EXPORTER_OTLP_HEADERS": "X-SF-Token=fdsfsdfsd-M-fdsfsdfsd-r",
        "OTEL_DOTNET_AUTO_ENABLED": "true",
        "OTEL_DOTNET_TRACES_METHODS_INCLUDE": "WebApplication3.Controllers.SampleController.DoSomething",
        "OTEL_DOTNET_AUTO_TRACES_ADDITIONAL_SOURCES": "Microsoft.AspNetCore.Http,System.Net.Http",
        "OTEL_TRACES_EXPORTER": "otlp,console",
        "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
        "OTEL_DOTNET_AUTO_INSTRUMENTATION_LOGS": "true",
        "OTEL_DOTNET_AUTO_INSTRUMENTATION_ENABLED": "true",
        "CORECLR_ENABLE_PROFILING": "1",
        "OTEL_DOTNET_AUTO_LOG_DIRECTORY": "C:\\temp\\otel-logs",
        "CORECLR_PROFILER": "{918728DD-259F-4A6A-AC2C-4F76DA9F3EAB}",
        "DOTNET_STARTUP_HOOKS": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.startuphook\\1.10.0\\lib\\netcoreapp3.1\\OpenTelemetry.AutoInstrumentation.StartupHook.dll",
        "OTEL_DOTNET_AUTO_HOME": "%USERPROFILE%\\.nuget\\packages\\splunk.opentelemetry.autoinstrumentation\\1.9.0",
        "CORECLR_PROFILER_PATH_64": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.runtime.native\\1.10.0\\runtimes\\win-x64\\native\\OpenTelemetry.AutoInstrumentation.Native.dll",
        "CORECLR_PROFILER_PATH_32": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.runtime.native\\1.10.0\\runtimes\\win-x86\\native\\OpenTelemetry.AutoInstrumentation.Native.dll"
      }
    }
  }
}

SampleController.cs:

using Microsoft.AspNetCore.Mvc;
using System.Diagnostics;
using System.Net.Http;

namespace WebApplication3.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SampleController : ControllerBase
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public SampleController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        [HttpGet("execute")]
        public async Task<IActionResult> Execute()
        {
            await DoSomething();
            var traceId = Activity.Current?.TraceId.ToString();
            return Ok(new { message = "Request executed", traceId });
        }

        [NonAction]
        public async Task DoSomething()
        {
            using var activity = new Activity("SampleController.DoSomething").Start();
            // Internal HTTP call to another endpoint
            var response = await _httpClientFactory.CreateClient().GetAsync("https://jsonplaceholder.typicode.com/todos/1");
            var content = await response.Content.ReadAsStringAsync();
            Console.WriteLine(content);
        }
    }
}
@SN1

From the indexer, check network connectivity to the license master:

ping <license-master-ip>
telnet <license-master-ip> 8089

If it fails, there’s a network issue (firewall, DNS, etc.).

Check DNS resolution if you’re using a hostname for the license master:

nslookup <license-master-hostname>

If it doesn’t resolve, DNS might be the problem.
@SN1

This error typically indicates that the indexer is unable to communicate with the license master, or that there’s a licensing issue affecting its operation.

You can run these searches on the License Master to find the host name/IP of the indexers (license slaves) connecting to it:

| rest /services/licenser/slaves splunk_server=local
| table title label
| rename title as GUID label as Indexer

index=_internal component=Metrics group=tcpin_connections
    [| rest /services/licenser/slaves splunk_server=local | table title | rename title as guid ]
| dedup sourceHost sourceIp
| table sourceHost sourceIp hostname guid version os
@muhammadfahimma

Please review the following documentation, and consider raising a Splunk support ticket as well: Investigate findings using drilldown searches and dashboards in Splunk Enterprise Security - Splunk Documentation
@Nrsch

By default, Key Indicator Searches like “Access - Total Access Attempts,” “Malware - Total Infection Count,” and “Risk - Median Risk Score By Other” do not directly change the “Aggregated User Risk” value on the Risk Analysis dashboard. They are designed to display metrics, not update risk scores. However, if they feed into correlation searches that assign risk scores, they could have an indirect effect.

https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Customizing_Enterprise_Security_dashboards_to_improve_security_monitoring
One of my 5 indexers is getting this error:

[MSE-SVSPLUNKI01] restricting search to internal indexes only (reason: [DISABLED_DUE_TO_GRACE_PERIOD,0])

I have some questions:
1. How do I check whether my indexer is connected to the license master or not?
2. If it is NOT, how can I connect them again?
3. And if the connection has been good from the start, what do I do next?
I am running a Splunk Indexer on Docker in an EC2 instance. I use the following Compose file to start the service. However, every time I restart the EC2 instance, the contents of inputs.conf get reset.

version: "3.6"

networks:
  splunknet:
    driver: bridge
    attachable: true

volumes:
  splunk-var:
    external: true
  splunk-etc:
    external: true

services:
  splunk:
    networks:
      splunknet:
        aliases:
          - splunk
    image: xxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/splunk/splunk:latest
    container_name: splunk
    restart: always
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=password
    ports:
      - "80:8000"
      - "9997:9997"
    volumes:
      - splunk-var:/opt/splunk/var
      - splunk-etc:/opt/splunk/etc

The following is my conf:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = password
requireClientCert = false
Thank you for the reply, it’s very useful. Let me explain my question in more detail: I have some Key Indicator Searches like “Access - Total Access Attempts”, “Malware - Total Infection Count”, and “Risk - Median Risk Score By Other”. You said that if they trigger, I can see their related notable events in “Incident Review”. That’s fine, but my main question is: do these searches have any effect on any value in any ES dashboard? For example, might they change the value of “aggregated user risk” in “ES -> Security Intelligence -> Risk Analysis -> aggregated user risk”? Thank you very much for your reply.
I am using a macro in one of my saved searches and encountering the below error in Splunk. Based on the known issue, what changes should I make to the macro to resolve this error and eliminate the message?

`search_on_index_time("`$input_macro$`", $span$)`
| fields _time source id
| bin _time AS earliest_time span=$span$
| eval latest_time=earliest_time+$span$
| stats values(id) AS ids, values(source) AS sources BY earliest_time latest_time
| eval ids="\"".mvjoin(ids, "\",\"")."\"", sources="\"".mvjoin(sources, "\",\"")."\""
| `fillnull(value="", fields="earliest_time latest_time input_macro summarize_macro sources ids")`
| map maxsearches=20000 search="search earliest=$earliest_time$ latest=$latest_time$ `$input_macro$(\"$sources$\",\"$ids$\")` | `$summarize_macro$($earliest_time$, $latest_time$)` | eval _time=$earliest_time$"
| appendpipe [| where source="route" | collect index=$index$ source="route" | where false()]
| appendpipe [| where source="system" | collect index=$index$ source="system" | where false()]

ERROR TimeParser [24352 SchedulerThread] - Invalid value "$latest_time$" for time term 'latest'

@isoutamo @livehybrid
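For reference when reasoning about the earliest_time/latest_time values that map interpolates: the bin/eval pair at the top of the search is plain integer bucketing, flooring each event's _time to a span boundary. A Python sketch of that arithmetic (epoch seconds, span in seconds; purely illustrative):

```python
def bucket(t: int, span: int) -> tuple[int, int]:
    """Floor t to its span boundary, mirroring `bin _time AS earliest_time
    span=<span>` followed by `eval latest_time = earliest_time + span`."""
    earliest = t - (t % span)
    return earliest, earliest + span

# An event partway through a 5-minute span lands in a whole-span bucket.
earliest, latest = bucket(1700000862, 300)
print(earliest, latest)  # 1700000700 1700001000
```

Both values are epoch integers, which is what the map subsearch substitutes into earliest= and latest=; the TimeParser error indicates the $latest_time$ token was not replaced by such a value at run time.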