All Posts

Hi All. I have noticed that a lot of junk host values are being reported to the search head. We receive logs from multiple operating systems into Splunk through the UF. We expect to see only the hostname during a search, but I have noticed a lot of junk values reaching the SH. As part of troubleshooting, I verified the raw logs on the UF and they are not breaking there; somehow the events are breaking between the UF and the indexer. Can you please assist me with this issue?
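For reference, event breaking for UF-forwarded data is usually controlled by props.conf on the indexer (or heavy forwarder), not on the UF itself. A minimal sketch of an explicit line-breaking stanza, assuming a hypothetical sourcetype name `my_os_logs` and newline-delimited events:

```
# props.conf on the indexer -- sourcetype name is a placeholder
[my_os_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 30
```

If no such stanza exists for the affected sourcetype, Splunk falls back to automatic line merging, which can split events unexpectedly.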
@livehybrid: Thanks for the response. The time frame is dynamic, from the time picker in the dashboard. I tried the last 60 minutes and an expanded time range as well. In all cases there is a discrepancy.
Hi @muhammadfahimma

I believe you may be experiencing a bug (BLUERIDGE-13575), which is a known issue with ES 8.0.2 (see https://docs.splunk.com/Documentation/ES/8.0.2/RN/KnownIssues).

If this is the issue, the following workaround may solve it until it is fixed in the product:

Workaround: Remove `source` before sending to detection, i.e. add `| fields - source` to the end of the search.

Either way, I would suggest raising a support case: even if it is this particular bug, support will be able to associate it with your account and keep you updated on progress and resolution.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @shabamichae, as @isoutamo also said: do only the requested things, nothing else! Ciao. Giuseppe
Hi @Poojitha, two things.

First, put all the search terms in the main search for a more performant search:

index="*test" sourcetype=aws:test host=testhost lvl IN (Error, Warn) source="*testsource*"
| stats count BY lvl
| sort -count

Second, to compare two searches you have to use a fixed time frame, and never latest=now, because new events could arrive in the meantime. So run your search over a past timeframe (e.g. as @livehybrid hinted) or the previous hour.

Ciao. Giuseppe
Hi there, how can I use the stats command to get a one-to-one mapping between fields? I have tried both the "list" and "values" functions, but the results are not as expected.

Example: we are consolidating data from 2 indexes, and both indexes have the same fields of interest (user, src_ip).

Base query:

index=okta OR index=network
| iplocation src_ip
| stats values(src_ip) values(deviceName) values(City) values(Country) by user, index

Results: we get something like this:

user       | index   | src_ip                                              | deviceName                               | Country
John_smith | okta    | 10.0.0.1, 192.178.2.24                              | laptop01                                 | USA
John_smith | network | 198.20.0.14, 64.214.71.89, 64.214.71.90, 71.29.100.90 | laptop01, laptop02, server01, My-CloudPC | USA

Expected results: how do we map which src_ip is coming from which deviceName? We want to align deviceName in the same sequence as src_ip. If I use list instead of values in my stats, it shows duplicates for src_ip and deviceName, and even a |dedup src_ip is not helping.

Hope that's clear.
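One common way to preserve the pairing (a sketch, not tested against this data) is to make the pairing key part of the group-by, so each output row carries exactly one src_ip/deviceName pair instead of two independent multivalue columns:

```
index=okta OR index=network
| iplocation src_ip
| stats values(City) as City values(Country) as Country by user index src_ip deviceName
```

This assumes each event carries one src_ip and one deviceName; if either field is multivalue within a single event, it would need `mvexpand` first.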
Hi @Poojitha

Can you confirm that you are running the search across the exact same time frame, e.g. "Yesterday"? If you run something like "Last 24 hours" then the actual timeframe will be different each time you run it, which would explain why your values are slightly different when running the search versus viewing the dashboard.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
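For a repeatable comparison, the window can be pinned with snap-to time modifiers so both runs cover an identical, already-closed hour (a sketch; the index and fields are taken from the dashboard query above):

```
index="*test" sourcetype=aws:test host=testhost earliest=-2h@h latest=-1h@h
| stats count BY lvl
```

`@h` snaps to the top of the hour, so the window does not drift between runs.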
1. The Splunk process is running on the server.
2. Configured the correct inputs under inputs.conf and outputs.conf:

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 300
index = wineventlog
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 300
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist3 = EventCode="5447"
index = wineventlog
renderXml = false
Hi All,

I have a panel in a classic dashboard with a pie chart visualisation. Below is the query:

index="*test" sourcetype=aws:test host=testhost
| table lvl msg _time source host tnt
| search lvl IN (Error, Warn) source="*testsource*"
| chart count BY lvl
| sort -count

When I run the query it shows lvl: warn = 304, error = 5. But the pie chart shows different counts: warn = 325, error = 7. I don't understand what is causing this. Can anyone please help me figure it out? I really appreciate it.

Thanks,
PNV
Hi @SN1

Did you see the responses to your previous post about this on Friday? https://community.splunk.com/t5/Splunk-Search/Error/m-p/712793 Did any of those solutions work for you?

It sounds like your license has expired or was not installed correctly. Go to https://yourSplunkInstance/en-US/manager/system/licensing and check that the license is showing as valid.

Are you expecting your instance to be connected to a license server, or does it have its own license installed? Can you reach the license server (if applicable) from the problematic server using netcat? (nc -vz -w1 <serverIP> 8089)

Once you have confirmed, let us know and we can look at a tweaked method for further investigation. If your license isn't showing here, or if the instance cannot connect to your license server (if applicable), then you will need to resolve this before being able to search non-internal indexes.
Hi @tt-nexteng

How are you loading your inputs.conf into the Docker image? Are you adding it directly into the container once it has started up? Splunk Ansible runs each time the container starts, so the container is fairly idempotent and will apply the configuration defined in default.yml / docker-compose ENV variables when started.

Check out https://splunk.github.io/docker-splunk/ADVANCED.html for some configuration options you might want to look at to persist the inputs.conf - specifically the section on enabling SSL, as it has the config for inputs on port 9997 too: https://splunk.github.io/docker-splunk/ADVANCED.html#:~:text=distributed%2C%20containerized%20environment.-,Enable%20SSL%20Internal%20Communication,-To%20secure%20network

Sample default.yml snippet to configure Splunk TCP with SSL:

splunk:
  ...
  s2s:
    ca: /mnt/certs/ca.pem
    cert: /mnt/certs/cert.pem
    enable: true
    password: abcd1234
    port: 9997
    ssl: true
  ...

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @santoshboorlaga

The AccessViolationException and ExecutionEngineException errors suggest problems with the native profiler component.

Check environment variables - update your launchSettings.json to ensure all paths and versions match, and ensure the NuGet packages are installed with matching versions.

Modify your controller for better tracing:

using Microsoft.AspNetCore.Mvc;
using System.Diagnostics;
using System.Net.Http;
using System.Diagnostics.Metrics;
using System.Collections.Generic;

namespace WebApplication3.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SampleController : ControllerBase
    {
        private readonly IHttpClientFactory _httpClientFactory;
        private static readonly ActivitySource _activitySource = new ActivitySource("WebApplication3.Controllers");

        public SampleController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        [HttpGet("execute")]
        public async Task<IActionResult> Execute()
        {
            // Create a parent activity for the entire request
            using var requestActivity = _activitySource.StartActivity("Execute", ActivityKind.Server);
            requestActivity?.SetTag("http.method", "GET");
            requestActivity?.SetTag("http.path", "/api/sample/execute");

            await DoSomething();

            // Get the current trace ID
            var traceId = Activity.Current?.TraceId.ToString() ?? requestActivity?.TraceId.ToString();

            requestActivity?.SetTag("request.success", true);
            return Ok(new { message = "Request executed", traceId });
        }

        [NonAction]
        public async Task DoSomething()
        {
            // This method should be instrumented by auto-instrumentation,
            // but we can also add explicit instrumentation for redundancy
            using var activity = _activitySource.StartActivity(
                "DoSomething",
                ActivityKind.Internal,
                parentContext: Activity.Current?.Context ?? default);

            activity?.SetTag("operation.type", "data_fetch");

            try
            {
                // Internal HTTP call to another endpoint
                var client = _httpClientFactory.CreateClient();
                var response = await client.GetAsync("https://jsonplaceholder.typicode.com/todos/1");
                response.EnsureSuccessStatusCode();
                var content = await response.Content.ReadAsStringAsync();

                activity?.SetTag("operation.success", true);
                Console.WriteLine($"TraceId: {activity?.TraceId}");
                Console.WriteLine(content);
            }
            catch (Exception ex)
            {
                activity?.SetTag("operation.success", false);
                activity?.SetTag("error.message", ex.Message);
                activity?.SetStatus(ActivityStatusCode.Error);
                throw;
            }
        }
    }
}

Add Program.cs configuration - you need to manually add OpenTelemetry services to your Program.cs file for better control and to ensure proper integration.

Regarding your CORECLR_ENABLE_PROFILING issue: you mentioned that when you set "CORECLR_ENABLE_PROFILING": "0", you see trace IDs in the console but not in Splunk APM. This makes sense because:

- Setting this to 0 disables the profiler-based auto-instrumentation.
- Your explicit instrumentation still works (via Activity objects).
- But without the profiler, the method-level tracing won't work.

The solution is to keep CORECLR_ENABLE_PROFILING set to 1 but fix the underlying issues:

- Make sure all paths in environment variables are correct.
- Check the OpenTelemetry configuration as shown above.

For production environments, using an OpenTelemetry Collector is recommended:

- It acts as a buffer between your application and Splunk.
- It provides retry mechanisms for temporary connection issues.
- It can batch telemetry data for more efficient transmission.
- It reduces the overhead on your application.

For development, you can continue using direct export to Splunk as you're doing now, but for production, I'd recommend setting up a collector.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
It seems that there are so many macros etc. that we can hardly say anything about it directly. The only thing I can say is that you should try to resolve it by going forward step by step and finding out why latest_time doesn't have a value defined. This app https://classic.splunkbase.splunk.com/app/1603/ can help you identify what values you have defined in your code. Just add script=… in your dashboard and it shows the values to you. See e.g. https://data-findings.com/wp-content/uploads/2024/09/HSUG-20240903-Tiia-Ojares.pdf page 4.
@wj742

First, ensure the UF is actually running on the Windows server:

- Open the Services panel (services.msc) and look for "SplunkForwarder".
- Confirm it's running. If it's stopped, start it. If it's running, restart it to rule out a temporary glitch (right-click > Restart).

Check UF logs for errors. The UF logs can reveal why data isn't being forwarded. On the Windows server:

- Navigate to C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log
- Open splunkd.log in a text editor and look for ERROR or WARN messages, especially around the time you restarted the service or when forwarding should have occurred.
- Key phrases: TcpOutputProc (indicates connection issues to the indexer) or FileInputTracker (indicates issues reading monitored files).

Common issues to spot:

- "Connect to <indexer_IP>:<port> failed" - suggests a network or indexer problem.
- "Paused the data flow" - indicates a forwarding block, often due to indexer issues.

Validate the forwarding configuration. Even if you think the configuration is fine, double-check the UF's outputs.conf:

Location: C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf (or in an app directory like etc\apps\<app_name>\local\ if managed by a deployment server).

Example of a correct configuration:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <indexer_IP>:9997

Things to verify:

- The server line points to the correct indexer IP and port (typically 9997).
- No typos in the IP or port.
- disabled = false (or omitted, as false is the default).

If changes are made, restart the UF: C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe restart

Confirm network connectivity. Since you've said ports are open, test connectivity explicitly. From the Windows server, open a Command Prompt and run:

telnet <indexer_IP> 9997

(replace with your indexer's IP and port). If it connects (blank screen), the connection is good. If it fails ("Connect failed"), there's a network issue despite open ports.

Alternative: use PowerShell:

Test-NetConnection -ComputerName <indexer_IP> -Port 9997

If it fails:

- Double-check the firewall on the Windows server (outbound TCP 9997).
- Check the indexer's firewall (inbound TCP 9997).
- Confirm with your network team that no intermediate devices (e.g., proxies, NATs) are blocking traffic.

Verify the indexer's receiving configuration. The indexer must be configured to receive data; ensure it's set to listen on the expected port (e.g., 9997).

Validate the inputs configuration. The UF needs to know what data to forward. Check inputs.conf:

Location: C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf (or an app directory).

Example for Windows Event Logs:

[WinEventLog://Application]
disabled = false
index = my_index
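The indexer-side receiving check mentioned above can be done with the standard Splunk CLI on the indexer itself (a sketch; credentials are placeholders):

```
# On the indexer: enable receiving on TCP 9997
$SPLUNK_HOME/bin/splunk enable listen 9997 -auth <admin>:<password>

# Confirm which receiving ports are active
$SPLUNK_HOME/bin/splunk display listen -auth <admin>:<password>
```

Enabling the listener writes a [splunktcp://9997] stanza to the indexer's inputs.conf, which can also be created manually.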
@wj742

Check the Splunk process: ensure the Splunk Universal Forwarder (UF) process is running.

Verify the configuration files: ensure inputs.conf and outputs.conf are correctly configured. Here are sample configurations:

# inputs.conf
[monitor://C:\path\to\logs]
disabled = false
sourcetype = my_sourcetype

# outputs.conf
[tcpout:my_indexer]
server = indexer_hostname:9997

Ensure the Splunk user has the necessary permissions to read the log files being monitored.
@wj742  Check the splunkd.log for more detailed information.  
We have a UF installed on one of our Windows servers. All the configurations seem fine and the ports are open, but the server is still not forwarding data to Splunk.
@SN1 Check this https://community.splunk.com/t5/Splunk-Search/Receiving-quot-DISABLED-DUE-TO-GRACE-PERIOD-quot/td-p/82802 
We are trying to implement method-level tracing using the `splunk.opentelemetry.autoinstrumentation` package (version 1.9.0) in a .NET Core Web API application targeting .NET 9. We've set up the code, but we're facing runtime issues. Despite trying various solutions, including reinstalling the `splunk.opentelemetry.autoinstrumentation` package, the issues persist. Could you please help us resolve these and suggest any necessary modifications? Do we need to add a collector?

Another issue: when we set "CORECLR_ENABLE_PROFILING": "0" we are able to see trace IDs in the console, but we are unable to see the trace ID in the Splunk APM window.

Error 1:

System.ExecutionEngineException
  HResult=0x80131506
  Source=<Cannot evaluate the exception source>
StackTrace:
  <Cannot evaluate the exception stack trace>
  at OpenTelemetry.AutoInstrumentation.NativeMethods+Windows.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[], Int32)
  at OpenTelemetry.AutoInstrumentation.NativeMethods.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[])
  at OpenTelemetry.AutoInstrumentation.Instrumentation.RegisterBytecodeInstrumentations(Payload)
  at OpenTelemetry.AutoInstrumentation.Instrumentation.Initialize()
  at DynamicClass.InvokeStub_Instrumentation.Initialize(System.Object, System.Object, IntPtr*)
  at System.Reflection.MethodBaseInvoker.InvokeWithNoArgs(System.Object, System.Reflection.BindingFlags)
  at System.Reflection.RuntimeMethodInfo.Invoke(System.Object, System.Reflection.BindingFlags, System.Reflection.Binder, System.Object[], System.Globalization.CultureInfo)
  at System.Reflection.MethodBase.Invoke(System.Object, System.Object[])
  at OpenTelemetry.AutoInstrumentation.Loader.Loader.TryLoadManagedAssembly()
  at OpenTelemetry.AutoInstrumentation.Loader.Loader..cctor()
  at OpenTelemetry.AutoInstrumentation.Loader.Loader..ctor()
  at System.RuntimeType.CreateInstanceDefaultCtor(Boolean, Boolean)
  at System.RuntimeType.CreateInstanceImpl(System.Reflection.BindingFlags, System.Reflection.Binder, System.Object[], System.Globalization.CultureInfo)
  at System.Reflection.Assembly.CreateInstance(System.String)
  at StartupHook.Initialize()
  at System.StartupHookProvider.ProcessStartupHooks(System.String)

Error 2:

Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
  at OpenTelemetry.AutoInstrumentation.NativeMethods+Windows.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[], Int32)
  at OpenTelemetry.AutoInstrumentation.Instrumentation.RegisterBytecodeInstrumentations(Payload)
  at OpenTelemetry.AutoInstrumentation.Instrumentation.Initialize()
  at OpenTelemetry.AutoInstrumentation.Loader.Loader.TryLoadManagedAssembly()
  at OpenTelemetry.AutoInstrumentation.Loader.Loader..cctor()
  at OpenTelemetry.AutoInstrumentation.Loader.Loader..ctor()
  at System.RuntimeType.CreateInstanceDefaultCtor(Boolean, Boolean)
  at System.Reflection.Assembly.CreateInstance(System.String)
  at StartupHook.Initialize()
  at System.StartupHookProvider.ProcessStartupHooks()

launchSettings.json:

{
  "$schema": "https://json.schemastore.org/launchsettings.json",
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:47665",
      "sslPort": 44339
    }
  },
  "profiles": {
    "WebApplication3": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:7146;http://localhost:5293",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "OTEL_SERVICE_NAME": "MyDotNet6WebApi",
        "OTEL_EXPORTER_OTLP_ENDPOINT": "https://ingest.XX.signalfx.com/v2/trace/otlp",
        "OTEL_EXPORTER_OTLP_HEADERS": "X-SF-Token=fdsfsdfsd-M-fdsfsdfsd-r",
        "OTEL_DOTNET_AUTO_ENABLED": "true",
        "OTEL_DOTNET_TRACES_METHODS_INCLUDE": "WebApplication3.Controllers.SampleController.DoSomething",
        "OTEL_DOTNET_AUTO_TRACES_ADDITIONAL_SOURCES": "Microsoft.AspNetCore.Http,System.Net.Http",
        "OTEL_TRACES_EXPORTER": "otlp,console",
        "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
        "OTEL_DOTNET_AUTO_INSTRUMENTATION_LOGS": "true",
        "OTEL_DOTNET_AUTO_INSTRUMENTATION_ENABLED": "true",
        "CORECLR_ENABLE_PROFILING": "1",
        "OTEL_DOTNET_AUTO_LOG_DIRECTORY": "C:\\temp\\otel-logs",
        "CORECLR_PROFILER": "{918728DD-259F-4A6A-AC2C-4F76DA9F3EAB}",
        "DOTNET_STARTUP_HOOKS": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.startuphook\\1.10.0\\lib\\netcoreapp3.1\\OpenTelemetry.AutoInstrumentation.StartupHook.dll",
        "OTEL_DOTNET_AUTO_HOME": "%USERPROFILE%\\.nuget\\packages\\splunk.opentelemetry.autoinstrumentation\\1.9.0",
        "CORECLR_PROFILER_PATH_64": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.runtime.native\\1.10.0\\runtimes\\win-x64\\native\\OpenTelemetry.AutoInstrumentation.Native.dll",
        "CORECLR_PROFILER_PATH_32": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.runtime.native\\1.10.0\\runtimes\\win-x86\\native\\OpenTelemetry.AutoInstrumentation.Native.dll"
      }
    }
  }
}

SampleController.cs:

using Microsoft.AspNetCore.Mvc;
using System.Diagnostics;
using System.Net.Http;

namespace WebApplication3.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SampleController : ControllerBase
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public SampleController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        [HttpGet("execute")]
        public async Task<IActionResult> Execute()
        {
            await DoSomething();
            var traceId = Activity.Current?.TraceId.ToString();
            return Ok(new { message = "Request executed", traceId });
        }

        [NonAction]
        public async Task DoSomething()
        {
            using var activity = new Activity("SampleController.DoSomething").Start();
            // Internal HTTP call to another endpoint
            var response = await _httpClientFactory.CreateClient().GetAsync("https://jsonplaceholder.typicode.com/todos/1");
            var content = await response.Content.ReadAsStringAsync();
            Console.WriteLine(content);
        }
    }
}

Labels: rapid response for Splunk, Splunk Add-On for OpenTelemetry Collector
@SN1

From the indexer, check network connectivity to the license master:

ping <license-master-ip>
telnet <license-master-ip> 8089

If it fails, there's a network issue (firewall, DNS, etc.).

Check DNS resolution if you're using a hostname for the license master:

nslookup <license-master-hostname>

If it doesn't resolve, DNS might be the problem.