All Posts

Anyway, a question to @verbal_666 - why the need to delay the script's output in the first place? It seems a very unusual requirement.

... originally the script used "timeout [variable_secs]" to launch a while loop that waits with "test -f file" to see whether "file" was generated between STARTTIME and the next "timeout [variable_secs]" (variable_secs is taken from a table; every file has its own variable_secs). If timeout exits with exit code 124, an stdout + log entry is written to a log file. If a new instance of the script is launched with the same arguments, it checks whether a previous instance is still running and exits immediately, waiting for the previous one to do its job. So I get a single input entry for each file in the table (file exists, or file does not exist after start_time + variable_secs; the table also has a start_time value for every file). During the "variable_secs" wait, if there is a new file in the table to check, the script is blocked by the previous instance, so I can't check it.

Let's say the table looks like this:

server1 /tmp/flow.log 07:00 07:30
server1 /tmp/flow2.log 07:10 07:15

The script is scheduled by splunkd every 5m. Say it's now 06:55:

06:55 splunkd runs the script; it exits with no output/log since it's not yet 07:00 or 07:10 (the script checks this)
07:00 splunkd runs the script; the task starts for "/tmp/flow.log" and waits until 07:30 for the file to be generated
07:05 splunkd runs the script; it aborts since the 07:00 instance is still running in the background
07:10 same as 07:05, so "/tmp/flow2.log" is skipped
07:15 same as 07:10, "/tmp/flow2.log" is skipped again
07:20 same as 06:55
...

So "/tmp/flow2.log" is skipped entirely. Now, on some servers, as said, a cron was used. On other servers I rewrote the script without the timeout/sleep: it writes an entry every 5m with a variable "FOUND=[0|1]", and then with SPL a "stats count sum(FOUND) as found by host,file" plus some dashboards/alerts that trace them; a sum of 0 in that timerange means the file is not present.
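For what it's worth, the alert side of that approach could stay a simple scheduled search along these lines; the index and sourcetype here are placeholders, while FOUND, host and file are the fields described above:

index=script_checks sourcetype=file_check earliest=-30m
| stats sum(FOUND) as found by host, file
| where found=0

Any host/file pair with found=0 over the window means the expected file never appeared.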
I know it's an unusual question; that's why I asked. That is the way I'm using now, getting data from a monitor input. For the same reason I wanted to know if I could manage it through splunkd directly. On some servers I also used /etc/cron.hourly/ driven by splunkd, creating tasks dynamically on the fly by launching a root subshell to create the task in cron.hourly and then picking up its output file with a monitor input after the timeout/sleep has done its job and the script has exited with its exit code. But the mission was to NOT TOUCH ANYTHING IN THE SYSTEM, so I asked whether a scripted input could be forced to rerun even if the previous instance was still running in the background 🤷‍ Thanks anyway everyone.
1) You can easily test out using emulation that the right-side values override the left-side values. 2) I do not usually use keepempty. If you have reproducible evidence that dedup does not behave as documented, maybe engage support.
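Regarding point 1, a quick emulation along these lines (the key/source field names and values are made up purely for illustration) shows the subsearch value winning:

| makeresults
| eval key="a", source="from_csv"
| join type=left key
    [| makeresults
     | eval key="a", source="from_subsearch" ]
| table key source

The output row has source="from_subsearch", i.e. the right-side value has overridden the left-side one.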
We're all about options, so here's a prototype designed to work with inputs.conf interval = 0 (run at startup and re-run on exit) or interval = -1 (run once at startup):

#!/bin/bash

function cleanup_script()
{
    # script cleanup
    for worker in $(jobs -p)
    do
        echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: killing worker ${worker}"
        kill "${worker}"
    done
    echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: finish script"
}

function work()
{
    function cleanup_work()
    {
        # work cleanup
        echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: finish work"
    }

    trap "cleanup_work" RETURN EXIT ABRT

    echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: start work"
    # do something
    sleep 90
    # do something else
    return
}

trap "cleanup_script" EXIT ABRT

BASENAME=$(basename -- "$0")

echo "$(date +"%b %e %H:%M:%S") $(hostname) ${BASENAME}[${BASHPID}]: start script"

while :
do
    work &
    sleep 60
done

Splunk will index stdout as in your original script. In this example, I'm writing BSD syslog style messages. Stopping Splunk or disabling the input at runtime will send SIGABRT to the parent process. Note that stdout won't be captured by Splunk at this point. If you need those messages, write them to a log file.
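For reference, a minimal inputs.conf stanza for a scripted input like this might look as follows; the script path, sourcetype, and index here are placeholders, not values from the post above:

[script://./bin/file_watcher.sh]
# 0 = run at startup and re-run whenever the script exits; -1 = run once at startup
interval = 0
sourcetype = file_watcher
index = main
disabled = 0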
We can't help you fix what's wrong without knowing what is wrong.  "Red health" could be caused by any of many things (or multiple things).  Click on the red health icon to show the health status and then click on each red icon to get details about what is wrong (or what Splunk thinks is wrong - sometimes it's wrong about that).  Pass on that information and someone should be able to help you fix it.
Yes, I'm new to this Splunk stuff and I'm trying to learn. I have a red health status and I'm not sure how to fix it, and I don't know if it's causing me other issues.

Here are the items that are marked in red:

splunkd
Data Forwarding
Splunk-2-Splunk Forwarding
TCPOutAutoLB-0
File Monitor Input
Ingestion Latency
Real-time Reader-0

Also, I'm currently taking some online courses from Udemy. Would anyone recommend anything else, or somewhere else to learn? I'm only asking because these courses are out of date.
Thank you so much. Got the answer.
To me, it depends on whether or not cron is centrally managed. If the Splunk administrator has access to the tools or teams that manage cron, then it may be preferred. cron, its derivatives, and Splunk are all very poor schedulers overall. I always recommend a full-featured scheduler or workload automation tool, but they can be cost and resource prohibitive. logger was just an example.
One caveat: Top-level nodes without children are displayed as leaves, and the ordering could use some work. I don't know whether the viz supports an undocumented sort field similar to how tree visualizations in most UI toolkits support a sorting callback; I haven't looked at the source code.
Hi @gcusello,

I started with the PCF Excel workbook published at https://www.apqc.org/resource-library/resource-listing/apqc-process-classification-framework-pcf-cross-industry-excel-11. I exported the Combined sheet to a CSV file named pcf_combined.csv and uploaded the file to my Splunk instance as a new lookup file with the same name.

I started with the following search:

| inputlookup pcf_combined.csv
| eval id='Hierarchy ID'
| eval label='PCF ID'." - ".'Hierarchy ID'." ".Name
| rex field=id "(?<parentId>[^.]+\\..+)\\."
| table id label parentId

The regular expression only extracts a parentId value for layer 3 and lower, i.e. x.y has a null parentId value, x.y.z has a parentId value of x.y, x.y.z.w has a parentId value of x.y.z, etc. Hierarchy ID values are unordered. To allow Treeview Viz to sort nodes more naturally, I modified the label field:

| eval label='Hierarchy ID'." ".Name." [".'PCF ID'."]"

The resulting visualization correctly displays all nodes. I'm running Splunk Enterprise 9.3.0 and Treeview Viz 1.6.0.
It doesn't have to be the current search time. It might be a time taken from the summarized values. A relatively good example would be tracing emails in some email systems. They tend to send multiple events during a single message pass, and you have to combine all those events to get a full picture of the message: sender, recipients, action taken, scan results, and so on. With an ad-hoc search you'd probably have to use the transaction command, which doesn't play nice with bigger data sets. But you can run a summarizing search every 10 or 30 minutes that correlates all emails processed during a given time window and writes that summarized info into an index. In such a case you'd probably want one of the message's times (most probably the initial submission time) as the summary event's _time.
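As a rough sketch, a summarizing search of that kind could look something like this; the index, sourcetype, and field names are invented for illustration:

index=mail sourcetype=mta earliest=-30m@m latest=@m
| stats earliest(_time) as _time values(sender) as sender values(recipient) as recipients values(action) as action values(scan_result) as scan_result by message_id
| collect index=email_summary

Here earliest(_time) keeps the initial submission time as the summary event's _time, and collect writes the correlated result into the summary index.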
Actually, I find this even more complicated than a stand-alone cron-launched solution. I'm saying this as a seasoned admin. It is very "inconsistent": it is spawned by Splunk, it emits syslog, and of course each distro handles syslog differently. While it is tempting to use Splunk's internal scheduler, I'd rather advise using the system-wide cron and explicitly created log files. It's more obvious this way. Anyway, a question to @verbal_666 - why the need to delay the script's output in the first place? It seems a very unusual requirement.
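For illustration, a cron-driven equivalent (the script path, log path, and schedule are made up here) could be a simple /etc/cron.d entry plus an explicit log file:

*/5 * * * * root /usr/local/bin/check_files.sh >> /var/log/check_files.log 2>&1

Splunk then just monitors /var/log/check_files.log, and the scheduling stays visible to any admin looking at the system.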
Hi, did you check your KV Store? A few lookups related to Incident Review are there; verify them too.
Hi @jacksonDeng, Have you tried entering dummy values?
It is not mandatory to use that app, but unless you want to build the configurations yourself, you might as well use a pre-made app. The app has documentation describing how to install and use it: https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Usecustomalertactions
Your first and second screenshots show, respectively, a dropdown menu from Splunk SOAR (the SOAR product from Splunk), and a dropdown menu from Splunk Enterprise (the SIEM product from Splunk). These are entirely different products. I would recommend installing Splunk SOAR and then its interface should look like your first screenshot and you should then be able to make an Automation user.
Can you share your experience (use case) where you change your timestamp to the current search time? Thank you!!
It depends on what your users expect. I have use cases for summary indexes which contain statistics on X days of previous data, but my users assume that a summarized event of weekly statistics dated the 15th of September contains statistics about the 8th through the 15th of September. In this case it makes sense to re-eval the _time value to the search time.
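A minimal sketch of that approach (index and field names invented) might be:

index=web earliest=-7d@d latest=@d
| stats count as weekly_events by host
| eval _time=now()
| collect index=weekly_summary

The eval _time=now() stamps the summary events with the search time rather than a time derived from the underlying data.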
Hi @yuanliu,

1) If I left join a CSV and a subsearch that have the same field name, will the data from the subsearch overwrite the data from the CSV in that field? In my example above it is the "source" field. I added this for tracking purposes.

2) I also found out that keepempty=true doesn't always work in dedup. Have you ever experienced the same?

Thank you again for your help.
It depends on what you are using the summary index for and what you want the timestamp to represent. There is no right way or wrong way; it is a choice you make based on your use cases for the data in the summary index.