All Posts


This looks like JSON of sorts - have you considered treating it as such? In the meantime, you could strip the field names with rex in sed mode: | rex mode=sed "s/\"Feild\d\"://g"
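To make that concrete, here is a hedged sketch of both routes, using makeresults to emulate one of the events from the question (the exact search head context is my assumption, not from the thread):

```
| makeresults
| eval _raw="{\"Feild1\":\"DATA1\",\"Feild2\":\"DATA2\",\"Feild3\":\"DATA3\",\"Feild4\":\"DATA4\"}"
| spath
| table Feild1 Feild2 Feild3 Feild4
```

spath extracts each Feild* key as a proper field you can table or chart directly. If the dashboards instead expect the old comma-separated text in _raw, a two-step sed rewrite should get you there: | rex mode=sed "s/[{}\"]//g" | rex mode=sed "s/Feild\d://g" turns the JSON line into DATA1,DATA2,DATA3,DATA4.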
As a Splunk newcomer, I need guidance on using Splunk effectively to send logs to a Disaster Recovery (DR) environment, where I have one Heavy Forwarder (HF) and one Deployment Server (DS) on-premises. What steps should I take with my HF and DS to ensure smooth log ingestion into the DR Splunk Cloud instance? I have considered replicating the VMs (HF and DS) as a possible solution, but I am still not sure about the best approach. Please advise on the following:
- Are there any specific licensing requirements or restrictions for replicating Splunk instances?
- What are the potential performance implications of replicating a Splunk VM, especially considering the data volume and real-time or near-real-time requirements?
- Are there any recommended best practices or configurations for replicating HF and DS VMs to a DR environment?
Thanks for your help.
Thank you for your response. I understand that using a dedicated syslog server is the best practice, but until now I hadn't understood which errors I might run into without one. I tested your props.conf suggestion but still observe the same behavior described in the OP.
Hi, check each of the searches below separately; if you get no results, then we can check further:

index=notable

| inputlookup es_notable_events

| inputlookup incident_review_lookup
Hi, I'm working with .NET and using the 'services/search/jobs/' API. After successfully connecting through the 'services/auth/login' API, I receive a SessionKey, which I add to the headers for subsequent requests as follows: oRequest.Headers.Authorization = new AuthenticationHeaderValue("Splunk", connectionInfo.AccessToken); When I receive a 401 error code after calling 'services/search/jobs/', I attempt to reconnect by calling 'services/auth/login' up to three times to retrieve a new session key and update the header accordingly. Despite this, the session key sometimes remains unchanged (is this expected behavior?), and regardless of whether the token changes or not, I continue to receive the 401 Unauthorized error: Response: '<?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="WARN">call not properly authenticated</msg> </messages> </response> ' Error from System.Net.Http: System.Net.Http.HttpRequestException: Response status code does not indicate success: 401 (Unauthorized). The URL I'm using starts with https and the port is 8089. Can you assist with this issue?
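For comparison, the same flow can be reproduced with curl against a live instance (the hostname and credentials below are placeholders; -k is only for self-signed certificates). This can help isolate whether the problem is the session key itself or how the .NET client sends it:

```
# 1. Log in and capture the session key from the XML response body
curl -k https://splunk.example.com:8089/services/auth/login \
     -d username=admin -d password='changeme'

# 2. Send it back as "Splunk <sessionKey>" in the Authorization header
curl -k https://splunk.example.com:8089/services/search/jobs \
     -H "Authorization: Splunk <sessionKey-from-step-1>" \
     -d search="search index=_internal | head 1"
```

If this works while the .NET client still gets 401s, compare the Authorization header byte for byte; a common cause is sending the raw key without the "Splunk " scheme prefix, or reusing a key after its session has timed out.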
I actually did not know about the KV Store and the mongod instance behind it, so I did a bit of research and trial and error on that. I disabled the KV Store in Splunk, and all of the forms and dashboards related to notables and incident review stopped working (they threw an error about the dashboard not being available), so there must be a direct relation between the two. I re-enabled the KV Store and everything went back to normal (except I still have no notables stored). I've been trying to find issues in the mongod logs, but nothing so far. Can you please direct me towards other possible places in the KV Store (or similar) to look and investigate?
Dear All, We have a Splunk index whose data follows a pattern, and the pattern was recently changed: {"Feild1":"DATA1","Feild2":"DATA2","Feild3":"DATA3","Feild4":"DATA4"} We have several dashboards using the previous data pattern, like below: DATA1,DATA2,DATA3,DATA4 I'm looking for a way to filter out or suppress the {"Feild1": "Feild2":.....} wrapper using Splunk queries and feed the output to the dashboards. Kindly suggest how this can be done. Thanks
Hi @KhalidAlharthi, OK, it shouldn't be a resource issue. The only remaining possibility is the throughput of the disks, which you can check only with an external tool like Bonnie++. Could you check the resources of your indexers using the Monitoring Console? Please check whether the resources are fully used. Then, you could try to configure parallel pipelines on your indexers; for more info see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Pipelinesets You could try the value parallelIngestionPipelines = 2 in the [general] stanza of server.conf; this way you make better use of your hardware resources. Ciao. Giuseppe
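As a sketch, the setting Giuseppe mentions lives in server.conf on each indexer. It is only worth trying when the Monitoring Console shows spare CPU capacity, since each additional pipeline set consumes roughly one more set of pipeline threads:

```
# server.conf on each indexer - restart splunkd after changing
[general]
parallelIngestionPipelines = 2
```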
Hi @Siddharthnegi, is your saved search an alert or a report? Start by searching for it on the corresponding page. Then, maybe the saved search is private and visible only to its owner. A second possibility is that you aren't in the app where it's located. If not, check the savedsearches.conf files for where it's located; maybe it was saved under a different name. If you still don't find it, are you sure that you saved it? Ciao. Giuseppe
True. Sometimes users don't have permissions to run their own crons and the system-wide crontab is fixed. That can be problematic here. Anyway, the (ugly) workaround to the issue of spawning such stuff from within Splunk itself would be to simply create multiple inputs. If you want to spawn 2-minute-long jobs every minute, you can just create two (or better yet, three, so that you're sure there's no overlap) separate inputs: one running */3, another on 1,4,7,10..., and another on 2,5,8,11... Ugly, but it should work.
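In inputs.conf terms, that staggering could look like the sketch below (the script names are placeholders; since stanza names must be unique per script path, the usual trick is copies or symlinks of the same script):

```
# Three staggered copies of the same scripted input: each fires every
# 3 minutes, offset by 1 minute, so a run lasting up to 2 minutes
# never overlaps with the next run of the same input.
[script://./bin/check_files_a.sh]
interval = */3 * * * *

[script://./bin/check_files_b.sh]
interval = 1-59/3 * * * *

[script://./bin/check_files_c.sh]
interval = 2-59/3 * * * *
```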
You might not have permissions to see it.
Hi, I have a saved search which is cron scheduled, but it is not showing on the saved searches panel (Settings -> Searches, reports, and alerts). What could be the reason?
Anyway, a question too - why the need to delay the script's output in the first place? It seems a very unusual requirement.

... Originally the script used "timeout [variable_secs]" to launch a while loop that waited with "test -f file" to see whether "file" was generated between STARTTIME and the next "timeout [variable_secs]" (variable_secs is taken from a table; every file has its own variable_secs). If timeout exits with exit code 124, a stdout message plus an entry is written to a log file. If a new copy of the script is launched with the same args, it checks whether a previous one is still running and, if so, exits immediately, letting the previous one do its job. So I have a single input entry for each file in the table (file exists or does not exist after start_time + variable_secs; the table also has its own start_time variable for every file). During the "variable_secs" wait, if a new file in the table needed checking, the script was blocked by the previous run, so I couldn't check it. Say the table looks like this:

server1 /tmp/flow.log 07:00 07:30
server1 /tmp/flow2.log 07:10 07:15

The script is scheduled by splunkd every 5m. Say it's now 06:55:
06:55 splunkd runs the script; it exits with no output/log entry since (the script checks this) it's not yet 07:00 or 07:10
07:00 splunkd runs the script; the task starts for "/tmp/flow.log" and waits until 07:30 for file generation
07:05 splunkd runs the script; it aborts since the 07:00 run is still running in the background
07:10 same as 07:05, so "/tmp/flow2.log" is skipped
07:15 same as 07:10, "/tmp/flow2.log" is skipped again
07:20 same as 06:55
...
So "/tmp/flow2.log" is skipped entirely. Now, on some servers, as said, a cron was used. On other servers I rewrote the script without the timeout/sleep to write an entry every 5m with a variable "FOUND=[0|1]", then in SPL a "| stats sum(FOUND) as found by host, file" with some dashboards/alerts that trace them; a sum of 0 in that time range means the file was not present.
I know it's an unusual question; for this reason I asked. THAT is the way I'm using it now, getting data from an input monitor, and for this reason I wanted to know if I could manage it through splunkd directly. On some servers I also used /etc/cron.hourly/ directly from splunkd, creating tasks dynamically on the fly by launching a root subshell to create the task in cron.hourly, then picking up its output file with an input monitor after the timeout/sleep does its job and the script exits with its exit code. But the mission was to NOT TOUCH ANYTHING IN THE SYSTEM, so I asked whether a monitored script could be forced to rerun even if the previous run was still in the background 🤷 Thanks anyway everyone
1) You can easily test, using emulation, that the right-side values override the left-side values. 2) I do not usually use keepempty. If you have reproducible evidence that dedup does not behave as documented, maybe engage support.
We're all about options, so here's a prototype designed to work with inputs.conf interval = 0 (run at startup and re-run on exit) or interval = -1 (run once at startup):

#!/bin/bash

function cleanup_script() {
    # script cleanup: kill any background workers, then log the shutdown
    for worker in $(jobs -p)
    do
        echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: killing worker ${worker}"
        kill "${worker}"
    done
    echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: finish script"
}

function work() {
    function cleanup_work() {
        # work cleanup
        echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: finish work"
    }
    trap "cleanup_work" RETURN EXIT ABRT
    echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: start work"
    # do something
    sleep 90
    # do something else
    return
}

trap "cleanup_script" EXIT ABRT
BASENAME=$(basename -- "$0")
echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: start script"
while :
do
    work &
    sleep 60
done

Splunk will index stdout as in your original script. In this example, I'm writing BSD syslog style messages. Stopping Splunk or disabling the input at runtime will send SIGABRT to the parent process. Note that stdout won't be captured by Splunk at this point. If you need those messages, write them to a log file.
We can't help you fix what's wrong without knowing what is wrong. "Red health" could be caused by any of many things (or multiple things). Click on the red health icon to show the health status, and then click on each red item to get details about what is wrong (or what Splunk thinks is wrong - sometimes it's wrong about that). Pass on that information and someone should be able to help you fix it.
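If clicking through the UI is awkward, the same feature-level detail can also be pulled with a search against the standard health endpoint (run as an admin; the exact field layout varies by Splunk version, so start from the raw output):

```
| rest /services/server/health/splunkd/details
```

Pasting the relevant rows from that output into the thread is usually enough for someone to pinpoint the failing component.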
Yes, I'm new to this Splunk stuff and I'm trying to learn. I have a red health status, I'm not sure how to fix it, and I don't know if it's causing me other issues. Here is the list of items that are marked in red:

splunkd
Data Forwarding
Splunk-2-Splunk Forwarding
TCPOutAutoLB-0
File Monitor Input
Ingestion Latency
Real-time Reader-0

Also, I'm currently taking some online courses from Udemy. Would anyone recommend anything else, or somewhere else to learn? I'm only asking because these courses are out of date.
Thank you so much. Got the answer.
To me, it depends on whether or not cron is centrally managed. If the Splunk administrator has access to the tools or teams that manage cron, then it may be preferred. cron, its derivatives, and Spl... See more...
To me, it depends on whether or not cron is centrally managed. If the Splunk administrator has access to the tools or teams that manage cron, then it may be preferred. cron, its derivatives, and Splunk are all very poor schedulers overall. I always recommend a full-featured scheduler or workload automation tool, but they can be cost- and resource-prohibitive. logger was just an example.