All Topics

I have this search to see logins to our Splunk environment:

index=_audit user="*" action="login attempt" info=succeeded | stats count by user

Management is asking to see the same data, but instead of a "count" column they want a column for each month. I assume it will be a table of some sort, but I can't figure out how to summarize by date. Here is an example of an individual entry:

Audit:[timestamp=03-03-2025 09:10:52.577, user=xxxxxx, action=login attempt, info=succeeded reason=user-initiated useragent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36" clientip=xxx.xxx.xxx.x" method=LDAP" session=17a169464fada764a1bac7310cac4c47]

The columns should be: user, monthA, monthB, monthC, with the counts under each month. Thanks!
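A possible approach for the month columns (just a sketch; the month label format is an assumption) is to derive a month field from _time and pivot on it:

index=_audit user="*" action="login attempt" info=succeeded
| eval month=strftime(_time, "%Y-%m")
| chart count over user by month

Here eval labels each event with its calendar month and chart produces one row per user with one column per month; the chart options limit and useother can be adjusted if the time range spans many months.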
Hi team,

I am unable to send logs to the server using the "splunk add monitor <filename>" command with forwarder version 9.4.0. Splunk is running as the root user. The add monitor command asks for credentials, and inputs.conf is not updated with the log file that was added to monitor.

sudo splunk add monitor Test.log
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R root:root /opt/splunkforwarder"
Splunk username:
Password:
Login failed

I tested with forwarder version 9.0.0 and it worked. That version also asked for credentials, but inputs.conf was updated and logs were sent to the server without providing the credentials.

I want to send logs to the server using forwarder 9.4.0. What changes should I make to get this working? Please suggest...
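As a point of comparison while troubleshooting, the same monitor can be declared directly in a configuration file instead of via the CLI (a sketch; the path, index, and sourcetype below are placeholders):

# $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder (or an app's local directory)
[monitor:///var/log/Test.log]
disabled = false
index = main
sourcetype = test_log

A forwarder restart picks up the stanza without requiring an interactive login, which at least separates the credential prompt issue from the question of whether the monitor itself works on 9.4.0.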
IHAC with an SVA C3 (on-prem) setup running 9.4.0 on the MN, SHC, and Deployer but 9.3.2 on the peers (an upgrade is in the works due to an unsupported Linux 3.x kernel). They have been running this way OK for about a month while the upgrade is pending.

Start of issue
A week ago the client wanted to disable the new 'audit_trail' app for platform confidentiality. They created a local folder for the app on the deployer ($SPLUNK_HOME/etc/shcluster/apps/audit_trail) and disabled it via a .conf file change. This worked fine and was pushed to the SHC from the deployer. The SHC is all in sync.

Symptom
The issue now being seen is that they can't delete TAs and apps with pushes from the deployer. For example, they are removing legacy TAs, and despite no longer being on the deployer, they remain on the SHC. The cluster is operational and in sync, and I have temporarily removed the 'audit_trail' workaround, which allows the usual command to operate again:

./splunk apply shcluster-bundle -target <https://x.x.x.x:8089> -preserve-lookups true

Otherwise you have to include the switch -push-default-apps true.

Next steps
I'm trying to locate the correct component in the _internal index to troubleshoot what is happening and why apps and TAs that are no longer on the deployer are not being deleted. Example:

index="_internal" source="/opt/splunk/var/log/splunkd.log" host IN (SH, SH, SH, Deployer)

I can't locate any warnings or relevant errors, even when including the relevant TA intended for removal, over the short time period in question. Any suggestions welcome.
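A sketch for narrowing that _internal search down (the host names are the same placeholders as above, and the keyword filters are assumptions about what the bundle-push messages contain):

index=_internal source="/opt/splunk/var/log/splunkd.log" host IN (SH, SH, SH, Deployer)
  log_level IN (WARN, ERROR) ("shcluster" OR "bundle" OR "app")
| stats count latest(_time) as last_seen by host component log_level
| sort - count

Grouping by the component field at least surfaces which splunkd components were active around the push, even if the exact message wording is unknown.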
Hello Splunkers,

I have logs that are generated only when there is a change in the system state:

6:01:01 - System Stop
10:54:01 - System Start
13:09:04 - System Stop
16:01:01 - System Start
17:01:01 - System Stop

Let's say I chart this for the time range 7 AM to 4 PM. The chart from 8 AM until 10:54:01 AM is empty, because the previous event was generated at 6:01:01, so there is a gap. I would like to fix this.

In the simple case only two values alternate, so the current value implies the previous one. For example, at 10:54:01 we received "System Start", so the state before it must have been Stop.

I need two solutions: one for this two-value scenario, and one for multiple values, like these:

14:01:01 - System Started
17:54:01 - System reset
22:09:04 - System Stop
23:01:01 - System Started
01:01:01 - System Stop

where I'm getting three values: Started, Stop, and reset.

Thanks in advance!
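One common pattern for this kind of sparse state log (a sketch; the field name state and the span are assumptions) is to carry the last reported state forward into the empty time buckets:

index=... sourcetype=...
| timechart span=15m latest(state) as state
| filldown state

timechart leaves buckets with no events empty, and filldown copies the most recent non-null state down into them, which works the same whether there are two states or several. For the two-value case, the state before the first event in the range can additionally be inferred with an eval as the opposite of the first observed value.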
Hello,

I am trying to collect bash_history logs in real time from multiple Linux hosts using Splunk. I have deployed the following script to append executed commands to /var/log/bash_history.log:

#!/bin/bash
LOG_FILE="/var/log/bash_history.log"
PROMPT_COMMAND_STR='export PROMPT_COMMAND='\''RECORD_CMD=$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//"); echo "$(date "+%Y-%m-%d %H:%M:%S") $(whoami) $RECORD_CMD" >> /var/log/bash_history.log'\'''

# 1. Create log file if it doesn't exist and set permissions
if [ ! -f "$LOG_FILE" ]; then
    touch "$LOG_FILE"
    echo "[INFO] Log file created: $LOG_FILE"
fi
chmod 666 "$LOG_FILE"
chown root:users "$LOG_FILE"
echo "[INFO] Log file permissions set"

# 2. Add PROMPT_COMMAND to /etc/bash.bashrc
if ! grep -q "PROMPT_COMMAND" /etc/bash.bashrc; then
    echo "$PROMPT_COMMAND_STR" >> /etc/bash.bashrc
    echo "[INFO] PROMPT_COMMAND added to /etc/bash.bashrc"
fi

# 3. Force loading of ~/.bashrc through /etc/profile
if ! grep -q "source ~/.bashrc" /etc/profile; then
    echo 'if [ -f ~/.bashrc ]; then source ~/.bashrc; fi' >> /etc/profile
    echo "[INFO] ~/.bashrc now loads via /etc/profile"
fi

# 4. Add PROMPT_COMMAND to all users' ~/.bashrc and ~/.profile
for user in $(ls /home); do
    for FILE in "/home/$user/.bashrc" "/home/$user/.profile"; do
        if [ -f "$FILE" ] && ! grep -q "PROMPT_COMMAND" "$FILE"; then
            echo "$PROMPT_COMMAND_STR" >> "$FILE"
            echo "[INFO] PROMPT_COMMAND added to $FILE (user: $user)"
        fi
    done
done

# 5. Add PROMPT_COMMAND for root user
for FILE in "/root/.bashrc" "/root/.profile"; do
    if [ -f "$FILE" ] && ! grep -q "PROMPT_COMMAND" "$FILE"; then
        echo "$PROMPT_COMMAND_STR" >> "$FILE"
        echo "[INFO] PROMPT_COMMAND added to $FILE (root)"
    fi
done

# 6. Ensure ~/.bashrc is sourced in ~/.profile for all users
for user in $(ls /home); do
    PROFILE_FILE="/home/$user/.profile"
    if [ -f "$PROFILE_FILE" ] && ! grep -q ". ~/.bashrc" "$PROFILE_FILE"; then
        echo ". ~/.bashrc" >> "$PROFILE_FILE"
        echo "[INFO] ~/.bashrc now sources from ~/.profile (user: $user)"
    fi
done

# 7. Ensure all users use Bash shell
while IFS=: read -r username _ _ _ _ home shell; do
    if [[ "$home" == /home/* || "$home" == "/root" ]]; then
        if [[ "$shell" != "/bin/bash" ]]; then
            echo "[WARNING] User $username has shell $shell, changing to Bash..."
            usermod --shell /bin/bash "$username"
        fi
    fi
done < /etc/passwd

# 8. Apply changes
exec bash
echo "[INFO] Configuration applied"

The script runs correctly, and /var/log/bash_history.log is created on all hosts. However, Splunk is not collecting logs from all hosts. Some hosts send data properly, while others do not.

What I have checked:
- Permissions on /var/log/bash_history.log: the file is writable by all users (chmod 666 and chown root:users).
- Presence of PROMPT_COMMAND in user sessions: when running echo $PROMPT_COMMAND, it appears correctly for most users.
- su behavior: if users switch with "su - username", it works. However, if they switch with "su username", sometimes the logs are missing.
- Splunk inputs configuration:

[monitor:///var/log/bash_history.log]
disabled = false
index = os
sourcetype = bash_history

This is properly deployed to all hosts.

Questions:
1. Could there be permission issues with writing to /var/log/bash_history.log under certain circumstances? Would another directory (e.g., /tmp/) be better?
2. How can I ensure that all user sessions (including "su username") log commands consistently?
3. Could there be an issue with the Splunk Universal Forwarder not properly monitoring /var/log/bash_history.log on some hosts?

Any insights or best practices would be greatly appreciated! Thanks.
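For comparing hosts, a quick sketch of a search that shows which forwarders are actually sending this sourcetype (index and sourcetype as in the inputs.conf above; the time range is arbitrary):

index=os sourcetype=bash_history earliest=-24h
| stats count latest(_time) as last_event by host
| eval last_event=strftime(last_event, "%Y-%m-%d %H:%M:%S")
| sort - last_event

Hosts missing from this list can then be checked individually on the forwarder side (for example with splunk list inputstatus) rather than guessing which ones are silent.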
I'm trying to display a simple input and some HTML text side by side, but they are appearing vertically. I have tried a few CSS options but none of them worked. Sample panel code:

<panel id="panel1">
  <input type="text" token="tk1" id="tk1id" searchWhenChanged="true">
    <label>Refine further?</label>
    <prefix> | where </prefix>
  </input>
  <html id="htmlid">
    <p>count: $job.resultCount$</p>
  </html>
  <event>
    <search>
      . . . .
    </search>
  </event>
</panel>

How can I make the input and the HTML text appear side by side, with the events at the bottom? My requirement is to achieve this in a single panel in a Simple XML dashboard. Thanks for any help!
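One approach often used in Simple XML (a sketch; the selectors assume the ids from the panel above and may need adjusting to the markup the dashboard actually generates) is to add an inline style block via a hidden <html> element and make the two blocks inline:

<panel id="panel1">
  <html depends="$alwaysHideCSS$">
    <style>
      #panel1 #tk1id  { display: inline-block; width: 300px; }
      #panel1 #htmlid { display: inline-block; vertical-align: top; }
    </style>
  </html>
  <!-- existing <input>, <html id="htmlid"> and <event> elements stay as they are -->
</panel>

The depends token is never set, so the styling element itself stays hidden while the CSS still applies to the panel.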
I currently have an indexer cluster with RF and SF set to 2. My hot/warm DB and my cold buckets will be on different storage disks in a cluster that has its own replication features, and I happen to have EC+2:1, meaning the data on my cold storage will already be replicated twice.

As a result, I would like to disable replication on my cold storage, but there is currently no way to do that in Splunk (or not that I know of). I am thinking of writing a cron job that deletes all replicated buckets on the cold storage disk. For this to happen, all of the indexers would refer to a single shared file path on the cold storage.

However, this begs the question: will search still work as normal? Let's say the primary bucket is on indexer A and the replicated copy is on indexer B, but indexer A is currently under maintenance. Would it be possible for indexer B to search the bucket with indexer A's bucket ID? Additionally, will indexer B sense that something is wrong and try to replicate the bucket back into warm again?
Hi All,

I have noticed a lot of junk host values reporting to the search head. We are receiving logs from multiple OSes into Splunk through the UF. We should only see the host name during the search, but I have noticed a lot of junk values reporting to the SH. As part of troubleshooting, I have verified the raw logs on the UF and they are not breaking there, so somehow the events are breaking between the UF and the indexer. Can you please assist me with this issue?
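A sketch of a search that can help profile where the junk host values are coming from (the index wildcard and time range are assumptions):

index=* earliest=-4h
| stats count values(sourcetype) as sourcetypes values(source) as sources by host
| sort - count

Seeing which sourcetype and source the junk host values are attached to usually points at the specific input whose event breaking or host extraction needs attention.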
Hi there, how can I use the stats command to do a one-to-one mapping between fields? I have tried both the "list" function and the "values" function, but the results are not as expected.

Example: we are consolidating data from two indexes, and both indexes have the same fields of interest (user, src_ip).

Base query:

index=okta OR index=network
| iplocation src_ip
| stats values(src_ip) values(deviceName) values(City) values(Country) by user, index

Results: we get something like this:

user        index    src_ip          DeviceName    Country
John_smith  okta     10.0.0.1        laptop01      USA
                     192.178.2.24
John_smith  network  198.20.0.14     laptop01      USA
                     64.214.71.89    laptop02
                     64.214.71.90    server01
                     71.29.100.90    My-CloudPC

Expected results: how can we map which src_ip is coming from which DeviceName? We want to align DeviceName in the same sequence as src_ip. If I use list instead of values in my stats, it shows duplicates for src_ip and deviceName, and even adding | dedup src_ip does not help.

Hope that's clear.
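A sketch of one way to keep the pairing intact (field names as in the query above): include the pairing fields in the by clause so each IP stays attached to its device, and aggregate only what can safely be collapsed:

index=okta OR index=network
| iplocation src_ip
| stats values(City) as City values(Country) as Country by user index src_ip deviceName

Alternatively, after a stats list(src_ip) list(deviceName), mvzip(src_ip, deviceName) keeps the two lists aligned in a single multivalue field that can be split back out with mvexpand and split().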
Hi All,

I have a panel in a classic dashboard with a pie chart visualisation. Below is the query:

index="*test" sourcetype=aws:test host=testhost
| table lvl msg _time source host tnt
| search lvl IN (Error, Warn) source="*testsource*"
| chart count BY lvl
| sort -count

When I run the query it shows lvl: Warn = 304, Error = 5. But the pie chart shows different counts: Warn = 325, Error = 7. I can't figure out what is causing this. Can anyone please help me understand it? I really appreciate it.

Thanks,
PNV
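For what it's worth, a sketch of the same search with the filters applied in the base search rather than after a table command (same field names as above), which makes it easier to compare the panel and an ad-hoc run over an identical time range:

index="*test" sourcetype=aws:test host=testhost source="*testsource*" lvl IN (Error, Warn)
| chart count BY lvl
| sort - count

If the numbers still differ, the panel's time picker or refresh settings are the usual suspect, since the query itself is then identical in both places.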
We have a UF installed on one of our Windows servers. All the configurations seem fine and the ports are open, yet the server is still not forwarding data to Splunk.
We are trying to implement method-level tracing using the `splunk.opentelemetry.autoinstrumentation` package (version 1.9.0) in a .NET Core Web API application targeting .NET 9. We have set up the code, but we are facing runtime issues. Despite trying various solutions, including reinstalling the `splunk.opentelemetry.autoinstrumentation` package, the issues persist. Could you please help us resolve these and suggest any necessary modifications? Do we need to add a collector?

Another issue: when we set "CORECLR_ENABLE_PROFILING": "0", we are able to see trace IDs in the console but unable to see the trace ID in the Splunk APM window.

Error 1:

System.ExecutionEngineException
  HResult=0x80131506
  Source=<Cannot evaluate the exception source>
  StackTrace: <Cannot evaluate the exception stack trace>
   at OpenTelemetry.AutoInstrumentation.NativeMethods+Windows.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[], Int32)
   at OpenTelemetry.AutoInstrumentation.NativeMethods.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[])
   at OpenTelemetry.AutoInstrumentation.Instrumentation.RegisterBytecodeInstrumentations(Payload)
   at OpenTelemetry.AutoInstrumentation.Instrumentation.Initialize()
   at DynamicClass.InvokeStub_Instrumentation.Initialize(System.Object, System.Object, IntPtr*)
   at System.Reflection.MethodBaseInvoker.InvokeWithNoArgs(System.Object, System.Reflection.BindingFlags)
   at System.Reflection.RuntimeMethodInfo.Invoke(System.Object, System.Reflection.BindingFlags, System.Reflection.Binder, System.Object[], System.Globalization.CultureInfo)
   at System.Reflection.MethodBase.Invoke(System.Object, System.Object[])
   at OpenTelemetry.AutoInstrumentation.Loader.Loader.TryLoadManagedAssembly()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader..cctor()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader..ctor()
   at System.RuntimeType.CreateInstanceDefaultCtor(Boolean, Boolean)
   at System.RuntimeType.CreateInstanceImpl(System.Reflection.BindingFlags, System.Reflection.Binder, System.Object[], System.Globalization.CultureInfo)
   at System.Reflection.Assembly.CreateInstance(System.String)
   at StartupHook.Initialize()
   at System.StartupHookProvider.ProcessStartupHooks(System.String)

Error 2:

Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
   at OpenTelemetry.AutoInstrumentation.NativeMethods+Windows.AddInstrumentations(System.String, OpenTelemetry.AutoInstrumentation.NativeCallTargetDefinition[], Int32)
   at OpenTelemetry.AutoInstrumentation.Instrumentation.RegisterBytecodeInstrumentations(Payload)
   at OpenTelemetry.AutoInstrumentation.Instrumentation.Initialize()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader.TryLoadManagedAssembly()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader..cctor()
   at OpenTelemetry.AutoInstrumentation.Loader.Loader..ctor()
   at System.RuntimeType.CreateInstanceDefaultCtor(Boolean, Boolean)
   at System.Reflection.Assembly.CreateInstance(System.String)
   at StartupHook.Initialize()
   at System.StartupHookProvider.ProcessStartupHooks()

launchSettings.json:

{
  "$schema": "https://json.schemastore.org/launchsettings.json",
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:47665",
      "sslPort": 44339
    }
  },
  "profiles": {
    "WebApplication3": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:7146;http://localhost:5293",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "OTEL_SERVICE_NAME": "MyDotNet6WebApi",
        "OTEL_EXPORTER_OTLP_ENDPOINT": "https://ingest.XX.signalfx.com/v2/trace/otlp",
        "OTEL_EXPORTER_OTLP_HEADERS": "X-SF-Token=fdsfsdfsd-M-fdsfsdfsd-r",
        "OTEL_DOTNET_AUTO_ENABLED": "true",
        "OTEL_DOTNET_TRACES_METHODS_INCLUDE": "WebApplication3.Controllers.SampleController.DoSomething",
        "OTEL_DOTNET_AUTO_TRACES_ADDITIONAL_SOURCES": "Microsoft.AspNetCore.Http,System.Net.Http",
        "OTEL_TRACES_EXPORTER": "otlp,console",
        "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
        "OTEL_DOTNET_AUTO_INSTRUMENTATION_LOGS": "true",
        "OTEL_DOTNET_AUTO_INSTRUMENTATION_ENABLED": "true",
        "CORECLR_ENABLE_PROFILING": "1",
        "OTEL_DOTNET_AUTO_LOG_DIRECTORY": "C:\\temp\\otel-logs",
        "CORECLR_PROFILER": "{918728DD-259F-4A6A-AC2C-4F76DA9F3EAB}",
        "DOTNET_STARTUP_HOOKS": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.startuphook\\1.10.0\\lib\\netcoreapp3.1\\OpenTelemetry.AutoInstrumentation.StartupHook.dll",
        "OTEL_DOTNET_AUTO_HOME": "%USERPROFILE%\\.nuget\\packages\\splunk.opentelemetry.autoinstrumentation\\1.9.0",
        "CORECLR_PROFILER_PATH_64": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.runtime.native\\1.10.0\\runtimes\\win-x64\\native\\OpenTelemetry.AutoInstrumentation.Native.dll",
        "CORECLR_PROFILER_PATH_32": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.runtime.native\\1.10.0\\runtimes\\win-x86\\native\\OpenTelemetry.AutoInstrumentation.Native.dll"
      }
    }
  }
}

SampleController.cs:

using Microsoft.AspNetCore.Mvc;
using System.Diagnostics;
using System.Net.Http;

namespace WebApplication3.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SampleController : ControllerBase
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public SampleController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        [HttpGet("execute")]
        public async Task<IActionResult> Execute()
        {
            await DoSomething();
            var traceId = Activity.Current?.TraceId.ToString();
            return Ok(new { message = "Request executed", traceId });
        }

        [NonAction]
        public async Task DoSomething()
        {
            using var activity = new Activity("SampleController.DoSomething").Start();
            // Internal HTTP call to another endpoint
            var response = await _httpClientFactory.CreateClient().GetAsync("https://jsonplaceholder.typicode.com/todos/1");
            var content = await response.Content.ReadAsStringAsync();
            Console.WriteLine(content);
        }
    }
}
One of my 5 indexers is getting this error:

[MSE-SVSPLUNKI01] restricting search to internal indexes only (reason: [DISABLED_DUE_TO_GRACE_PERIOD,0])

I have some questions:
1. How do I check whether my indexer is connected to the license master or not?
2. If it is NOT connected, how can I connect it again?
3. And if the connection has been good from the start, what do I do next?
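A couple of sketches for checking the connection from the search head (the REST endpoint and field names are from memory and may differ by version):

| rest /services/licenser/localslave splunk_server=MSE-SVSPLUNKI01
| table splunk_server master_guid last_master_contact_attempt_time last_master_contact_success_time

index=_internal host=MSE-SVSPLUNKI01 sourcetype=splunkd component=LMTracker
| table _time log_level _raw

The first shows when the indexer last reached the license manager; the second surfaces the license-tracker messages that usually explain why contact is failing.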
I am running a Splunk indexer on Docker in an EC2 instance. I use the following Compose file to start the service. However, every time I restart the EC2 instance, the contents of inputs.conf get reset.

version: "3.6"

networks:
  splunknet:
    driver: bridge
    attachable: true

volumes:
  splunk-var:
    external: true
  splunk-etc:
    external: true

services:
  splunk:
    networks:
      splunknet:
        aliases:
          - splunk
    image: xxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/splunk/splunk:latest
    container_name: splunk
    restart: always
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=password
    ports:
      - "80:8000"
      - "9997:9997"
    volumes:
      - splunk-var:/opt/splunk/var
      - splunk-etc:/opt/splunk/etc

The following is my conf:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = password
requireClientCert = false
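One thing worth testing (a sketch; the app name my_inputs is made up): the docker-splunk image re-runs its provisioning on every container start, which can rewrite generated configuration under etc/system/local, so keeping the stanza in a dedicated app directory on the persisted etc volume is a way to check whether the reset still happens:

# /opt/splunk/etc/apps/my_inputs/local/inputs.conf  (inside the splunk-etc volume)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = password
requireClientCert = false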
After a recent upgrade to Splunk ES 8.0.2, we have observed that none of the drilldowns for detection-based searches are available in the Mission Control screen anymore. We don't see any errors that might hint at an abnormality. Has anyone come across a similar issue? How can this be debugged?
Hi, there are some security saved searches and key indicator searches in ES. If I activate these searches and they trigger, in which ES dashboard can I see the results? For example, if the search "Malware - Total Infection Count" triggers, in which ES dashboard can I see the result?

# ES # enterprise security
I am getting the data extracted and published to a dashboard, but the problem is that the "Count" is published on separate rows, not merged in with the other rows. I want the count (from which the percentage is calculated) to end up as an additional column together with Percentage, Route, and Method. This is the SignalFlow I currently use:

B = data('http_requests_total', filter=filter('k8s.namespace.name', 'customer-service-pages-prd')).count()
A = data('http_requests_total', filter=filter('k8s.namespace.name', 'customer-service-pages-prd')).count(by=['route', 'method'])
Percentage = (A/B * 100)
Percentage.publish(label='Percentage')
A.publish('Count')

(Screenshot of the resulting table, with Count appearing on separate rows from Percentage.)

Any ideas on how to merge the data so that Count ends up on the same rows as Percentage?
Can anyone please help me with how to get Akamai logs into Splunk? We have a clustered environment with a syslog server that has a UF installed on it; it initially forwards data to our Deployment Server, which then deploys to the Cluster Manager and Deployer. We have 6 indexers, with 2 indexers in each site (a 3-site multisite cluster), and 3 search heads, one in each site. How should we proceed with this?
In the practical lab environment, how important is it to configure TLS on Splunk servers during the practical lab? Do I get penalized for not securing UF-to-indexer traffic using SSL/TLS?
How do I configure the inputs.conf for TA_tshark? (TA_tshark - Network Input for Windows | Splunkbase)