Oh, that is odd. Just to check, the other data doesn't have a sourcetype of linux_messages_syslog?
If you are installing this TA to monitor the DS itself, then @richgalloway's answer is correct. But if you are installing it to deploy it to some UFs, then you need to do something else.
If it's affecting things outside the dashboard, then you could set it to apply only to links within your dashboard area with:

div.dashboard a:focus { outline: none !important; box-shadow: none !important; }

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @fongpen  Are these errors appearing in the UI or in Splunk's internal logs? It's worth checking _internal for any related logs; if you can find the API calls in the logs, then look around those logs for any other failures that might suggest why the ticket number cannot be returned.
Hi @thahir  This is for Classic (Simple XML) dashboards, not Dashboard Studio; it's not possible to add custom jQuery elements to Dashboard Studio dashboards.
Hi @JH2  Are you able to share the JSON source for your dashboard so that I can check this for you? You should be able to edit the search, select the "Input" button (or from the dropdown in Splunk 10.0), and then select your time picker from the dropdown. Is this what you have done?
That command is for after making changes to serverclass.conf; it won't help after installing an app on the DS. To reload configs, try this URL:

http://<yoursplunkserver>:8000/en-US/debug/refresh?entity=admin/transforms-lookup

See also https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Configurationfilechangesthatrequirerestart#Splunk_Enterprise_changes_that_do_not_require_a_restart
@livehybrid  I checked on this today (been out for a couple of days), and it IS filtering out all the syslog that I wanted to drop, but for some reason it's dropping ALL the logs from that customer. I undid the RULESET change and now I'm getting all the logs again. My only thought is that maybe the customer's HF is treating everything it sends as syslog over the wire and then unpacking it when it arrives. Do you have any ideas? Thanks!
The issue came up again with only one alert. The app it's in had been set up with the fix to use request.ui_dispatch_app = search, but when clicking the "view results" link in the email it was still going to the same app. I made the app visible and the page now loads. In /apps/<app_name>/local/app.conf add:

[ui]
is_visible = true
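As a sketch of that change, the stanza can be appended idempotently from a shell. This is illustrative only: /tmp/example_app stands in for $SPLUNK_HOME/etc/apps/<app_name>, and the existence check is a simplification (it only looks for an is_visible line, not for the [ui] stanza context).

```shell
#!/bin/sh
# Illustrative sketch: append the [ui] stanza to an app's local app.conf
# if is_visible is not already set. APP_DIR is a stand-in path, not a
# real Splunk location.
APP_DIR=/tmp/example_app
mkdir -p "$APP_DIR/local"
conf="$APP_DIR/local/app.conf"
# Only append when no is_visible line exists yet, so reruns are harmless.
grep -q '^is_visible' "$conf" 2>/dev/null || printf '[ui]\nis_visible = true\n' >> "$conf"
cat "$conf"
```

After restarting (or bouncing splunkweb), the app should appear in the UI and the alert's "view results" link should resolve.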
Hi @danielbb  As mentioned by @richgalloway, the config reload happens automatically if the app is installed via the UI, so there is no need to explicitly run a command for the config to take effect. However, the reload command for pushing apps to UFs looks like this:

/opt/splunk/bin/splunk reload deploy-server -class <serverclassname>
Sounds great @richgalloway, what would the reload command look like?
Apps are installed on a DS the same as on any standalone search head. You can either use the reload command or restart Splunk. If you install the app from the UI, the reload is automatic, although some apps require a restart (for which you will be prompted).
What would be the proper way to deploy the TA_nix on the deployment server? Is the reload option available, or do I need to bounce the server?
I’m running Splunk in a Red Hat Linux environment and trying to collect logs generated by the auditd service.

I could simply put a monitor on "/var/log/audit/audit.log", but the lines of that file aren’t organized such that the records from a specific event are together, so I would end up having to correlate them on the back end. I’d rather use ausearch in a scripted input to correlate the records on the front end and provide clear delimiters (----) separating events before reporting them to the central server.

Obviously, I don’t want the entirety of all logged events being reported each time the script is run; I just want the script to report any new events since the last run. I found that the "--checkpoint" parameter ought to be useful for that purpose. Here’s the script that I’m using:

path_to_checkpoint=$( realpath "$( dirname "$0" )/../metadata/checkpoint" )
path_to_temp_file=$( realpath "$( dirname "$0" )/../metadata/temp" )
/usr/sbin/ausearch --input-logs --checkpoint "$path_to_checkpoint" > "$path_to_temp_file" 2>&1
output_code="$?"
chmod 777 "$path_to_checkpoint"
if [ "$output_code" -eq "0" ]; then
        cat "$path_to_temp_file"
fi
echo "" >> "$path_to_temp_file"
date >> "$path_to_temp_file"
echo "" >> "$path_to_temp_file"
echo "$output_code" >> "$path_to_temp_file"

It works just fine in the first round, when the checkpoint doesn’t exist yet and is generated for the first time, but in the second and all subsequent rounds I get error code 10: "invalid checkpoint data found in checkpoint file". It works fine in all rounds when I run the bash script manually from the command line, so there isn’t any kind of syntax error and I’m not using the parameters incorrectly. Based on the fact that the first round runs without error, I know that there isn’t any permissions issue with running "ausearch".
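One way to narrow this down further is to have the script log the environment it actually runs under, so a manual invocation can be diffed against one launched by splunkd (locale, PATH, umask, and working directory commonly differ). This is a diagnostic sketch, not part of the input itself; the scratch path is arbitrary.

```shell
#!/bin/sh
# Illustrative diagnostic: capture the execution environment so a manual
# run can be compared against a run under splunkd.
# /tmp/ausearch_env.out is an arbitrary scratch path.
diag=/tmp/ausearch_env.out
{
  echo "user=$(id -un)"
  echo "umask=$(umask)"
  echo "cwd=$(pwd)"
  echo "LANG=${LANG:-unset} LC_ALL=${LC_ALL:-unset}"
  echo "PATH=$PATH"
} > "$diag"
```

Run it once by hand and once via the forwarder, then diff the two captures; a locale difference in particular could plausibly change how ausearch parses or writes its checkpoint timestamps.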
It works fine in all rounds when I run the bash script as a cron job using crontab, so the fact that Scripted Inputs run like a scheduled service isn’t the root of the problem either.

I’ve confirmed that the misbehavior is occurring in the interpretation of the checkpoint (rather than the generation of the checkpoint) by doing the following.

Trial 1:
First round: bash script executed manually at the command line to generate the first checkpoint
Second round: bash script executed manually at the command line, interpreting the old checkpoint and generating a new checkpoint
Result: Code 0, no errors

Trial 2:
First round: bash script executed manually at the command line to generate the first checkpoint
Second round: bash script executed by the Splunk Forwarder as a Scripted Input, interpreting the old checkpoint and generating a new checkpoint
Result: Code 10, "invalid checkpoint data found in checkpoint file"

Trial 3:
First round: bash script executed by the Splunk Forwarder as a Scripted Input to generate the first checkpoint
Second round: bash script executed manually at the command line, interpreting the old checkpoint and generating a new checkpoint
Result: Code 0, no errors

Trial 4:
First round: bash script executed by the Splunk Forwarder as a Scripted Input to generate the first checkpoint
Second round: bash script executed by the Splunk Forwarder as a Scripted Input, interpreting the old checkpoint and generating a new checkpoint
Result: Code 10, "invalid checkpoint data found in checkpoint file"

Inference: The error occurs only when the Splunk Forwarder Scripted Input is interpreting the checkpoint, regardless of how the checkpoint was generated; therefore the interpretation is where the misbehavior is taking place.

I’m aware that I can include the "--start checkpoint" parameter to avoid this error by causing "ausearch" to start from the timestamp in the checkpoint file rather than look for a specific record to start from. I’d like to avoid using that option, though, because it causes the script to send duplicate records.
Any records that occurred at the timestamp recorded in the checkpoint are reported when that checkpoint was generated and also in the following execution of "ausearch". If no events are logged by auditd between executions of "ausearch", then the same events may be reported several times until a new event does get logged.

I tried adding the "-i" parameter to the command, hoping that it would help interpret the checkpoint file, but it didn’t make any difference.

For reference, here’s the format of the checkpoint file that is generated:

dev=0xFD00 inode=1533366 output=<hostname> 1754410692.161:65665 0x46B

I’m starting to wonder if it might be a line-termination issue: as if the Splunk Universal Forwarder were expecting each line to terminate with a CRLF, the way it would on Windows, but instead sees that the lines all end in LF because it’s Linux. I can’t imagine why that would be the case, since the version of the Splunk Universal Forwarder that I have installed is meant for Linux, but that’s the only thing that comes to mind.

I’m using version 9.4.1 of the Splunk Universal Forwarder. The Forwarder is acting as a deployment client that installs and runs apps issued to it by a separate deployment server running Splunk Enterprise version 9.1.8.

Any thoughts on what it is about Splunk Universal Forwarder Scripted Inputs that might be preventing ausearch from interpreting its own checkpoint files?
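The line-ending theory can be ruled in or out by inspecting the checkpoint file directly for carriage-return characters. A minimal sketch (the sample path and contents are illustrative stand-ins for the real ausearch checkpoint):

```shell
#!/bin/sh
# Illustrative check: does a file contain CR characters (CRLF endings)?
# The sample file below stands in for the real checkpoint; point ckpt
# at the actual path to test it.
ckpt=/tmp/checkpoint_sample
printf 'dev=0xFD00 inode=1533366\n' > "$ckpt"
# od -c renders a carriage return as the two characters "\r",
# so grepping the dump for that sequence detects CRLF endings.
if od -c "$ckpt" | grep -q '\\r'; then
  echo "CRLF detected"
else
  echo "LF only"
fi
```

Running the check on checkpoints written by a manual run and by a splunkd-launched run, and diffing the two files byte-for-byte (`cmp` or `diff`), would show whether the forwarder-written checkpoint really differs at all.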
Sorry for the late reply. What that ends up giving me is:

Unable to parse event_time_field='_time', check whether it is in epoch format.
0 results (7/1/25 12:00:00.000 AM to 8/1/25 12:00:00.000 AM)

I tried manipulating it with generative AI some more and continued to hit roadblocks.
Hi @JH2, can you check the link below?

https://answers.splunk.com/answers/627432/jquery-datepicker-in-splunk.html
I will start by saying that I am very new to Splunk, so I could be missing an obvious step. Please forgive me while I learn... lol

My Dashboard Studio date picker is not working. Here are the steps I have taken:

Created a search
Saved the search as a new Dashboard Studio dashboard
Dashboard Studio automatically adds the date picker to the dashboard

I linked the date picker to the dashboard by selecting "Sharing date range", but it's still not working. I must be missing something. Thank you in advance.
Getting the following error after upgrading the Splunk Add-on for ServiceNow to 9.0.0:

"Error Failed to create 1 tickets out of 1 events for account"

The ticket was created but the ticket number is not returned. Getting return code 201 with a curl command.

Versions:
Splunk Add-on for ServiceNow 9.0.0
Splunk Cloud: 9.3.2411.112
Thank you @thahir, it's working. But when I try to navigate the page with TAB, the outlines are hidden... I'm taking your answer as correct, hoping no developer is going to hate me. Thank you, AleCanzo.