
Hi @malimahesh25

Firstly, I just want to mention that it is generally not advised to run Splunk as root.

Regarding your issue - the reason that inputs.conf is not being updated is that the authentication to Splunk failed. Do you know your Splunk credentials for the forwarder? This is the Splunk admin user, NOT the system user credentials. If you do not know the password, you can reset it by following these steps:

1. Find the passwd file for your instance ($SPLUNK_HOME/etc/passwd) and rename it to passwd.bk
2. Create a file named user-seed.conf in your $SPLUNK_HOME/etc/system/local/ directory, containing the following text (in place of "NEW_PASSWORD", insert the password you would like to use):

[user_info]
USERNAME = admin
PASSWORD = NEW_PASSWORD

3. Restart Splunk

After restarting Splunk you should now be able to run the command, logging in with the new credentials. For more info see https://docs.splunk.com/Documentation/Splunk/9.4.0/admin/User-seedconf

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
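The steps above can be sketched as a small shell script. Note this is a hedged sketch: SPLUNK_HOME defaults to a scratch directory purely for illustration - point it at your real install (e.g. /opt/splunkforwarder) before using it, and replace NEW_PASSWORD with your own.

```shell
# Sketch of the admin password reset, assuming a standard $SPLUNK_HOME layout.
SPLUNK_HOME="${SPLUNK_HOME:-/tmp/splunk-demo}"   # placeholder path for illustration
mkdir -p "$SPLUNK_HOME/etc/system/local"

# 1. Move the old passwd file aside (if one exists)
if [ -f "$SPLUNK_HOME/etc/passwd" ]; then
  mv "$SPLUNK_HOME/etc/passwd" "$SPLUNK_HOME/etc/passwd.bk"
fi

# 2. Seed the new admin credentials
cat > "$SPLUNK_HOME/etc/system/local/user-seed.conf" <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = NEW_PASSWORD
EOF

# 3. Restart so the seed file is consumed (uncomment on a real instance)
# "$SPLUNK_HOME/bin/splunk" restart
```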
Hi team,

I am unable to send logs to the server using the "splunk add monitor <filename>" command with forwarder version 9.4.0. Splunk is running as the root user. The add monitor command asks for credentials, and the inputs.conf file is not getting updated with the log file name that was added to monitor.

sudo splunk add monitor Test.log
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R root:root /opt/splunkforwarder"
Splunk username:
Password:
Login failed

Tested with forwarder version 9.0.0 and it worked. That version also asked for credentials, but inputs.conf got updated and logs were sent to the server without providing the credentials.

I want to send logs to the server using forwarder 9.4.0. What changes should I make to get this working? Please suggest...
https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges

"To delete an app that you previously pushed, remove it from the configuration bundle. When you next push the bundle, each member will delete it from its own file system.

Note: If you need to remove an app, inspect its app.conf file to make sure that state = enabled. If state = disabled, the deployer will not remove the app even if you remove it from the configuration bundle."
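As a quick pre-flight check before deleting an app from the bundle, something like this can confirm the state setting. This is a hedged sketch: APP_DIR here is a demo path seeded with sample data - on a real deployer you would point it at $SPLUNK_HOME/etc/shcluster/apps/<app> and drop the demo line.

```shell
# Check an app's app.conf for "state = disabled" before removing it from
# the deployer's configuration bundle.
APP_DIR="${APP_DIR:-/tmp/demo-app}"              # placeholder path for illustration
mkdir -p "$APP_DIR/default"
printf '[install]\nstate = enabled\n' > "$APP_DIR/default/app.conf"  # demo data only

if grep -rqs '^state[[:space:]]*=[[:space:]]*disabled' "$APP_DIR"; then
  echo "state = disabled: the deployer will NOT remove this app"
else
  echo "state is enabled (or unset): safe to delete from the bundle"
fi
```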
I see what you're trying to do, but be aware that the whole process is flawed here.

1. You're relying on a file with 666 permissions. This way anyone can manipulate this file's contents in any way they see fit. They can remove contents, counting on a race condition so that the UF won't pick it up, or they can inject any contents. And you're not able to tell whether it's legitimate or not.

2. The $PROMPT_COMMAND is only run at prompt time, which means that when there is no prompt, the command is not run. And there are a lot of situations like that.

3. You're relying on bash being spawned as a shell and being the only shell for a user. That is not true. It's trivial to spawn any other shell or any other process in a non-tracked way.

4. You're also relying on startup scripts being run for a bash session. That also doesn't have to be true (see your "su" case).

So if it's your way of providing accountability... that's not gonna work very well. For that you're gonna need other tools. For example, a very limited sudo configuration (with sudo you have logging included), or a whole user session monitoring tool, but that's completely out of scope here.

If it's just so that you have some form of tracking what people are doing for future reference and to avoid situations like "how did we do that???", it might be a way to do so. In fact I'm doing a similar thing on my computers, but I use logger in my $PROMPT_COMMAND so that it gets pushed to system-wide syslog. Yes, it also has some of the aforementioned issues, but the log is a bit less easy to manipulate after it's been written to.

As a side remark - you have several mistakes in your script. For example, your grep -q will find _any_ PROMPT_COMMAND, even if it's commented out or just part of an echoed string.
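A minimal sketch of the syslog variant mentioned above - the function name, logger tag, and facility are illustrative choices, not a fixed convention. It would typically be sourced from a profile script, and it inherits the same caveats already listed (prompt-time only, bash only).

```shell
# Push each interactive command to syslog via logger(1) instead of a
# world-writable file.
log_last_command() {
  local last
  # "history 1" prints "  123  <command>"; strip the leading index number
  last=$(history 1 | sed 's/^ *[0-9]* *//')
  if [ -n "$last" ]; then
    logger -p local6.info -t bash_audit "user=$USER pwd=$PWD cmd=$last"
  fi
}
PROMPT_COMMAND='log_last_command'
```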
Please share your current SimpleXML source demonstrating the issue.
Hi, Thanks! This almost worked - the alignment now applies to all rows, including the panel title, input text, html text and events. Is there a way to leave the title as is and have just the input and html tags side by side, with events coming down below both of them? At the moment they are all in one row, with title, input, html and events next to each other.
For the first case, try something like this

| append
    [| makeresults
    | addinfo
    | rename info_min_time as _time
    | fields _time
    | eval state="System unknown"]
| sort 0 - _time
| streamstats last(state) as previousState window=1 current=f
| eval state=if(state!="System unknown",state,if(previousState=="System Stop", "System Start", "System Stop"))
IHAC with an SVA C3 (on-prem) setup running 9.4.0 on the MN, SHC and Deployer but 9.3.2 on the peers (upgrade in the works due to an unsupported Linux 3.x kernel). They've been running this way OK for about a month whilst the upgrade is pending.

Start of issue
The client wanted to disable the new 'audit_trail' app for platform confidentiality a week ago. They created a local folder for the app on the deployer ($SPLUNK_HOME/etc/shcluster/apps/audit_trail) and disabled it via a .conf file change; no issue, it worked OK and was pushed to the SHC from the deployer. The SHC is all in sync.

Symptom
The issue now being seen is that they can't delete TAs and apps with pushes from the Deployer. For example, they are removing legacy TAs, and despite no longer being on the deployer they remain on the SHC. The cluster is operational and in sync OK, and I have temporarily removed the 'audit_trail' workaround, which allows the usual command to operate again:

./splunk apply shcluster-bundle -target https://x.x.x.x:8089 -preserve-lookups true

Otherwise you have to include the switch -push-default-apps true.

Next steps:
I'm trying to locate the correct component in the _internal index to troubleshoot what is happening and why apps and TAs not on the Deployer are not being deleted. Example:

index="_internal" source="/opt/splunk/var/log/splunkd.log" host IN (SH, SH, SH, Deployer)

I can't locate any warnings or relevant errors, even when including the relevant TA intended for removal, over the short time period in question. Any suggestions welcome
Hi Will, Thanks! That kind of worked, but is there a way to have the html within the same panel, next to the input, without creating another panel? It sort of didn't align the panels correctly.
@livehybrid Thanks for the quick reply.

We are trying to avoid manual instrumentation in the code completely. I am trying to do a one-time setup so that whatever methods we want to trace, I can add them through the env variable OTEL_DOTNET_TRACES_METHODS_INCLUDE. Please suggest if there is any better approach to implement method-level tracing.

Regarding the error: all the paths mentioned are correct and the DLLs exist in those paths. We have checked the versions, and all the versions exist in the .nuget package. Could you please help me resolve the issue?

"%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.startuphook\\1.10.0\\lib\\netcoreapp3.1\\OpenTelemetry.AutoInstrumentation.StartupHook.dll",
"OTEL_DOTNET_AUTO_HOME": "%USERPROFILE%\\.nuget\\packages\\splunk.opentelemetry.autoinstrumentation\\1.9.0",
"CORECLR_PROFILER_PATH_64": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.runtime.native\\1.10.0\\runtimes\\win-x64\\native\\OpenTelemetry.AutoInstrumentation.Native.dll",
"CORECLR_PROFILER_PATH_32": "%USERPROFILE%\\.nuget\\packages\\opentelemetry.autoinstrumentation.runtime.native\\1.10.0\\runtimes\\win-x86\\native\\OpenTelemetry.AutoInstrumentation.Native.dll"
Hello @ITWhisperer, Thanks for asking!

You are right. The next event will be received within 3 days; it won't take more time in the worst case.

I'm using those values in a chart. When searching with a smaller time range, I can't see logs for that time range because of the gap in the logs. I have listed two scenarios. As per scenario 1, the previous value is just the opposite of the next one. Scenario 2 is a bit harder, having multiple values, which can be generated up to 3 days earlier in the worst case.

Thanks!
Just to say this isn't possible (to reference the readme file directly) from the UI. You'd have to do it from AppServer or as a view in the data folder, thus duplicating the file and effort. Would have been a nice option.
How does Splunk know what the previous state was unless it is included in the search? For example, if the first state is "System Stop" and the system was reset 3 days, or 3 weeks, or 3 months ago, what do you want Splunk to report?
Hello @ITWhisperer, Thanks for your reply.

17:54:01 - System reset
22:09:04 - System Stop
23:01:01 - System Started
01:01:01 - System Stop

In the case of searching from 21:00, I need to take "System reset", followed by the other values. Actually, I just need to fill in the value even if the logs weren't there in the selected time range. Thanks!
Hi @salikovsky

Regarding the su behaviour - when the expected logs are missing, are you able to confirm that they are present in the /var/log/bash_history.log file and only missing from Splunk?

Also, run the following search and check that each of the UFs is correctly configured to monitor the files:

index=_internal TailingProcessor "/var/log/bash_history.log"

You should see something like

02-21-2025 13:47:29.836 +0000 INFO TailingProcessor [1354 MainTailingThread] - Parsing configuration stanza: monitor:///var/log/bash_history.log.
02-21-2025 13:47:29.836 +0000 INFO TailingProcessor [1354 MainTailingThread] - Adding watch on path: /var/log/bash_history.log.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Please clarify what you want Splunk to assume in the second case, for example, if the search was from 21:00, would you want Splunk to assume the previous state was "System reset" or "System Start"? Do you want to search for a longer period of time to try and find the previous state, and then remove these results from the chart?
Hi @smanojkumar

I may have misunderstood, but if you want the search to include the event at 6AM then you will need to change the earliest time within the search to cover this event. Feel free to share a screenshot example of what you are seeing, to help explain the difference from your expectation/intention.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Try something like this

<dashboard version="1.1" theme="light">
  <label>Events</label>
  <row>
    <panel id="panel1">
      <input type="text" token="tk1" id="tk1id" searchWhenChanged="true">
        <label>Refine further?</label>
        <prefix> | where </prefix>
      </input>
      <html id="htmlid">
        <p>count: $job.resultCount$</p>
      </html>
      <event>
        <search>
          <query>index=_internal</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
      </event>
    </panel>
    <panel depends="$alwaysHide$">
      <html>
        <style>
          #panel1 .dashboard-panel {
            display: flex;
            flex-direction: row;
            flex-wrap: wrap;
          }
        </style>
      </html>
    </panel>
  </row>
</dashboard>
@salikovsky

Check the UF logs:

tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log | grep -i bash_history

File truncation, rotations, or buffering delays could be preventing logs from reaching Splunk.

Run on affected hosts:

/opt/splunkforwarder/bin/splunk list monitor

Ensure /var/log/bash_history.log is listed.
Hello Splunkers,

I have logs which are generated only when there is a change in the system:

6:01:01 - System Stop
10:54:01 - System Start
13:09:04 - System Stop
16:01:01 - System Start
17:01:01 - System Stop

These are the logs. Let's say I'm searching in a chart for the time range 7AM - 4PM: the chart from 8AM until 10:54:01AM is empty, since the previous event was generated at 6:01:01, so there is a gap. I would like to fix this.

In some cases only 2 values are repeated, so we can take the present one, and the past can be its opposite. E.g. at 10:54:01 - System Start, we received this log where the system starts, so the previous one must have been Stop.

That works for some cases. I need two solutions: one for only this scenario, the other for multiple values, like these:

14:01:01 - System Started
17:54:01 - System reset
22:09:04 - System Stop
23:01:01 - System Started
01:01:01 - System Stop

where I'm getting three values: Started, Stop and reset.

Thanks in advance!