All Topics


I have a table with some basic fields, where the events represent items that need to have an action taken. I would like to have a drop-down menu as the last item in each row with the desired action. The action to be taken could be a link to an external page, or even another Splunk search that just has to be executed, but not displayed. For example: Action 1 or Action 2 would result in either a separate search being run or a separate link being "clicked". In both cases I'd like the drop-down to be replaced with a static image, like a checkmark, indicating which action had been taken. I realize this is pretty custom stuff that's going to require JavaScript or something else; I just have no idea where to start. Has anybody else tried to do anything like this? Thanks!
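Short of full JavaScript (a custom table cell renderer), here is a hedged Simple XML sketch of the "hidden search" half: a drilldown on the table sets a token, and a standalone search that references that token only dispatches once the token is defined, without displaying anything. The index names and the action search are illustrative assumptions, not the poster's actual data:

<dashboard>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main sourcetype=items | table item status</query>
        </search>
        <drilldown>
          <!-- clicking a row records which item to act on -->
          <set token="action_item">$row.item$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <!-- background search: dispatches only once $action_item$ is set; its results are never shown -->
  <search>
    <query>| makeresults | eval item="$action_item$" | collect index=action_audit</query>
  </search>
</dashboard>

Replacing the drop-down with a checkmark afterwards would still need custom JavaScript (for example a cell renderer in a dashboard-attached script).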
The documentation says that loggers implementing ILogger are supported, but nowhere does it describe how to capture those messages. Our on-prem controller is version 21.4 and our agents are all 22.1 or later. We have the .NET Microsoft.Extensions.Logging.Console logger set up and it is outputting messages. How do we configure AppD to capture those messages?
I had a situation where I wanted to know if the mstats p90(cpu) over 5 minutes of a host was above a certain value, but needed to extend it to 10 minutes for some hosts. I figured rather than make two searches I could use span=5m and search back 10 minutes:

(Search window: -10m@m to @m)

| mstats p90(_value) AS p90A WHERE metric_name="Processor.%_Processor_Time" AND instance="_Total" BY host span=5m

Except this was often producing 3 events per host, because, unless I'm mistaken, mstats span always aligns to UTC 0. So if I'm running the search on a minute not divisible by 5 (say every 3 minutes), I'll end up with 3 data points per host instead of 2.

So I thought maybe prestats + bin + stats would work: I can get 10 one-minute samples and use bin aligntime=earliest to force them into just 2 time bins. I think this works; a quick check says that the p90 values are the same up to 4 decimal places when the times are aligned:

(Search window: @h-10m to @h)

| mstats p90(_value) AS p90A WHERE metric_name="Processor.%_Processor_Time" AND instance="_Total" BY host span=5m
| join host, _time
    [| mstats p90(_value) prestats=true WHERE metric_name="Processor.%_Processor_Time" AND instance="_Total" BY host span=1m
     | bin _time span=5m aligntime=earliest
     | stats p90(_value) AS p90B BY host, _time ]
| where round(p90A,4) != round(p90B,4)

So this search should work for any two 5-minute intervals aligned to any minute of the day:

| mstats p90(_value) prestats=true WHERE metric_name="Processor.%_Processor_Time" AND instance="_Total" BY host span=1m
| bin _time span=5m aligntime=earliest
| stats p90(_value) AS p90 BY host, _time
| where p90 > 80
| stats list(p90), count by host
| where count == 2 OR match(host, "prod")

I ended up not needing it when I realized my alert was already locked to every 5 minutes. Has anyone else tried doing this? Do you know a better way without creating two searches?
Hello,

I am trying to find processes with an elapsed time over a specific threshold using our OS process sourcetype. It looks something like this:

index=os sourcetype=ps host=* COMMAND=*
| where ELAPSED > "12:59:59"
| table COMMAND ELAPSED _time

But for some reason, ELAPSED values under this time are still displayed. If the ELAPSED time goes over a day, I am able to filter that out with the where command. Example:

| where ELAPSED > "60-12:59:59"
| table COMMAND ELAPSED _time

-> Output will give me the results which are older than 60 days, 12:59:59 hours.
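A likely cause: ELAPSED is a string, so > compares lexicographically ("9:59:59" sorts after "12:59:59" because "9" > "1"), which also explains why the day-prefixed comparison happens to behave. A hedged sketch that converts ELAPSED to seconds before comparing, assuming the usual ps formats DD-HH:MM:SS, HH:MM:SS, or MM:SS:

index=os sourcetype=ps host=* COMMAND=*
| eval days=if(match(ELAPSED,"-"), tonumber(mvindex(split(ELAPSED,"-"),0)), 0)
| eval hms=if(match(ELAPSED,"-"), mvindex(split(ELAPSED,"-"),1), ELAPSED)
| eval p=split(hms,":")
| eval elapsed_sec=days*86400 + case(
      mvcount(p)==3, tonumber(mvindex(p,0))*3600 + tonumber(mvindex(p,1))*60 + tonumber(mvindex(p,2)),
      mvcount(p)==2, tonumber(mvindex(p,0))*60 + tonumber(mvindex(p,1)))
| where elapsed_sec > 12*3600 + 59*60 + 59
| table COMMAND ELAPSED elapsed_sec _time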
Hi all,

I have a JSON payload that contains a 'custom_fields' section made up of a set of title:keyname and value:value pairs, because the tool outputs varying key/value pairs. Per the screenshot, I'm trying to figure out a way to extract any values where the key is title:Mode, but can't for the life of me work out how to do it. The Mode key will have 1 of 2 values (Monitored or Remediated), and I want to show this in a table that includes other values from the overall JSON packet (e.g. _time, description etc.). Any ideas would be greatly appreciated, as I've spent several hours trying!
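A hedged sketch, assuming custom_fields is an array of {title, value} objects and a Splunk version with mvmap and json_extract (8.1 or later); the field names are guesses from the description, not taken from the screenshot:

... base search ...
| spath path=custom_fields{} output=cf
| eval Mode=mvmap(cf, if(json_extract(cf, "title")=="Mode", json_extract(cf, "value"), null()))
| table _time description Mode

mvmap walks each element of the custom_fields array and keeps only the value whose sibling title is "Mode".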
I have a dashboard with a timeframe dropdown backed by a token called "query_time". I'm trying to create a panel that shows the availability of a given service/process over whatever period is selected in the "query_time" dropdown (my ps.sh runs every 1800 seconds, hence the 1800 shown). I thought it would be as simple as subtracting the two, as shown below, but I'm getting an error whenever I incorporate $query_time.earliest$ into the query. It seems to play nice if I use just $query_time.latest$, though. Is there any way to get this time difference in the query without it erroring out? Thanks in advance!

<panel>
  <chart>
    <title>rhnsd Availability</title>
    <search>
      <query>index=os host="mydb1" sourcetype=ps rhnsd | stats count | eval availability=count/(($query_time.latest$-$query_time.earliest$)/1800)*100) | fields availability</query>
      <earliest>$query_time.earliest$</earliest>
      <latest>$query_time.latest$</latest>
    </search>
    <option name="charting.axisY.maximumNumber">100</option>
    <option name="charting.chart">fillerGauge</option>
    <option name="charting.chart.rangeValues">[0,90,100]</option>
    <option name="charting.chart.stackMode">stacked</option>
    <option name="charting.chart.style">shiny</option>
    <option name="charting.gaugeColors">["0xdc4e41","0x53a051"]</option>
    <option name="refresh.display">progressbar</option>
  </chart>
</panel>
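One likely culprit: a time-picker preset such as "Last 24 hours" puts a relative time string like -24h@h into $query_time.earliest$, which cannot be subtracted (and the eval above also has an unbalanced closing parenthesis). A hedged sketch that sidesteps both by letting addinfo supply the search's own epoch boundaries:

index=os host="mydb1" sourcetype=ps rhnsd
| stats count
| addinfo
| eval availability=count/((info_max_time-info_min_time)/1800)*100
| fields availability

addinfo attaches info_min_time and info_max_time (the dispatched search's earliest and latest as epoch seconds), so the arithmetic works no matter what form the tokens take.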
Currently I have a search query that shows when an event happens, with the device_id, count, and device name. The search counts when an event happens, but I also want to know when the event doesn't happen, so it should also count devices with a count of 0. Here is my search:

sourcetype="transactions" AND (additionalMessage.requestUrl="*/cashIn/initialize" OR additionalMessage.requestUrl="*/cashIn/update" OR additionalMessage.requestUrl="*/cashIn/updateStatus" OR additionalMessage.requestUrl="*/cashIn/finalize") AND message != "Token time nonce*" message="POST - http://transactions/cashIn/finalize  - RESPONSE_SENT"
| rename additionalMessage.requestBody.deviceId as device_id
| stats count(message) by device_id
| sort -count(message)
| lookup DeviceNamesAll.csv device_id OUTPUT device_name

The search will show this:

device_id                          count(message)   device_name
0297f12-e0ac-40d6-8ff5-2d2c2787b   45               Store12
37ca5c1-2c3f-41d-88d4-57f8b354c4   41               Store54

I can't figure out how to also count the device_ids that have a count of 0. If anyone could help it would be greatly appreciated!
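A hedged sketch, assuming DeviceNamesAll.csv lists every device that should appear: append a zero-count row for each known device, then sum, so devices with events keep their real count and silent devices land at 0 (the original filters are elided for brevity):

sourcetype="transactions" AND ( ... same filters as above ... )
| rename additionalMessage.requestBody.deviceId as device_id
| stats count by device_id
| append
    [| inputlookup DeviceNamesAll.csv
     | fields device_id
     | eval count=0]
| stats sum(count) as count by device_id
| lookup DeviceNamesAll.csv device_id OUTPUT device_name
| sort - count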
Good afternoon,

TL;DR: Can a search result with more than one field be written to a file with a command like outputlookup, and have its multiple fields compared against for later use? If so, how? And how do I build an optimal dashboard that identifies new domains from DNS queries, whether via the .csv file or another way?

I am attempting to make a dashboard that displays newly-observed/newly-registered domains. From what I believe to be the most efficient method (please feel free to correct me or provide an alternate solution), I need a search query that establishes a baseline and outputs it to a .csv file, then a second query that drives the dashboard and compares new results to that .csv file. Here's what I have so far:

Step 1 - Create the .csv file

index=nsm tag=dns query=* message_type=QUERY src_ip="10.20.30.*"
| dedup query
| stats earliest(_time) as FirstAppearance count by src_ip
| fieldformat FirstAppearance=strftime(FirstAppearance, "%x %X")

This current query produces this output:

src_ip          FirstAppearance   count
10.20.30.40     01/01/2001        782

What I want it to produce for the .csv file is the src_ip and the associated queries that go along with it. Example:

src_ip          query
10.20.30.40     www<.>google<.>com
                www<.>youtube<.>com
10.20.30.41     www<.>news<.>com

Step 2 - Create the dashboard that compares new results to the .csv file

Once the dashboard is created, I know I'll have to use the inputlookup command to read the .csv. My question is how do I make that comparison, and how do I build the query so it displays accurately in the dashboard? Any information would be greatly appreciated.
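A hedged sketch of both steps, with dns_baseline.csv as an illustrative lookup name. The baseline keeps one row per src_ip/query pair; the dashboard search flags pairs that have no match in the baseline:

Step 1 (baseline, scheduled over a long window):

index=nsm tag=dns query=* message_type=QUERY src_ip="10.20.30.*"
| stats earliest(_time) as FirstAppearance by src_ip query
| outputlookup dns_baseline.csv

Step 2 (dashboard panel, recent window):

index=nsm tag=dns query=* message_type=QUERY src_ip="10.20.30.*"
| stats earliest(_time) as FirstSeen by src_ip query
| lookup dns_baseline.csv src_ip query OUTPUT FirstAppearance
| where isnull(FirstAppearance)
| fieldformat FirstSeen=strftime(FirstSeen, "%x %X")

Pairs with a null FirstAppearance are the ones the baseline has never seen; whether Step 1 should also merge new pairs into the baseline on each run is a design choice for the scheduled search.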
| chart count over date_month by seriesName

I have a search that displays counts over month by seriesName, but instead of this count I need to display the average of the count over month by series name:

date_month | seriesName 1 | seriesName 2 | seriesName 3
march      | %            | %            | %
feb        | %            | %            | %
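An average needs something to average over; a hedged sketch, assuming "average" here means the average daily count within each month:

... base search ...
| bin _time span=1d
| stats count by _time, date_month, seriesName
| chart avg(count) over date_month by seriesName

The intermediate stats produces one count per day per series, and chart then averages those daily counts within each month.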
Hi,

I need to use an eval count in a search like this:

| chart count(eval(web > 12))

But this count is only right if I filter events previously on a string. What I would like to do is something like this:

| chart count(eval(web > 12 AND TOTO=a))

NB: I know I can filter before the chart command, but that's impossible here because my chart command aggregates a lot of different events. How to do this please?

Rgds
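For what it's worth, a compound condition inside count(eval(...)) is valid SPL; the usual catch is that string literals must be double-quoted inside the eval. A hedged sketch, assuming TOTO holds the string value "a" (the AS name is illustrative):

| chart count(eval(web > 12 AND TOTO="a")) AS web_high_toto_a

Without the quotes, TOTO=a compares the field TOTO to a (probably nonexistent) field named a, which evaluates to null and counts nothing.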
Hi,

How do I add an additional numeric value to the Show Source dropdown list in version 8.1.6? I would like to add 2000; by default the max is 1000. In version 7.3.5 it was just a matter of adding another line to the XML with 2000, but in 8.1.6 the XML looks like this:

<?xml version="1.0"?>
<view template="pages/app.html" type="html" isDashboard="False">
  <label>Show Source</label>
</view>

7.3.5 looked like this:

<view isVisible="false" template="search.html" isDashboard="False">
  <label>Show source</label>
  <module name="AccountBar" layoutPanel="appHeader">
    <param name="mode">popup</param>
  </module>
  <module name="Message" layoutPanel="messaging">
    <param name="filter">*</param>
    <param name="clearOnJobDispatch">True</param>
    <param name="maxSize">1</param>
    <module name="SoftWrap" layoutPanel="pageControls">
      <param name="enable">False</param>
      <module name="Count" layoutPanel="pageControls">
        <param name="options">
          <list>
            <param name="text">25</param>
            <param name="value">25</param>
          </list>
          <list>
            <param name="text">50</param>
            <param name="selected">True</param>
            <param name="value">50</param>
          </list>
          <list>
            <param name="text">100</param>
            <param name="value">100</param>
          </list>
          <list>
            <param name="text">200</param>
            <param name="value">200</param>
          </list>
          <list>
            <param name="text">500</param>
            <param name="value">500</param>
          </list>
          <list>
            <param name="text">1000</param>
            <param name="value">1000</param>
          </list>
          <list>
            <param name="text">2000</param>
            <param name="value">2000</param>
          </list>
        </param>
        <module name="ShowSource" layoutPanel="resultsAreaLeft">
        </module>
      </module>
    </module>
  </module>
</view>
Hi all, please help with the below.

I am using rlog.sh (an inbuilt script provided by Splunk in the TA-unix package) to apply the ausearch utility to Linux audit logs:

SEEK_FILE=$SPLUNK_HOME/var/run/splunk/unix_audit_seekfile_model_prod
AUDIT_FILE=/opt/splunklogs_app/audit_prod/audit.log
if [ -e $SEEK_FILE ] ; then
    SEEK=`head -1 $SEEK_FILE`
else
    SEEK=0
    echo "0" > $SEEK_FILE
fi
FILE_LINES=`wc -l $AUDIT_FILE | cut -d " " -f 1`
if [ $FILE_LINES -lt $SEEK ] ; then
    # audit file has wrapped
    SEEK=0
fi
awk -v START=$SEEK -v OUTPUT=$SEEK_FILE 'NR>START { print } END { print NR > OUTPUT }' $AUDIT_FILE | tee $TEE_DEST | /sbin/ausearch -i 2>/dev/null | grep -v "^----"

This inbuilt script converts the default format of Linux audit logs by applying the ausearch utility. Example below.

Log input:

type=TTY msg=audit(1647315634.249:442): tty pid=2962 uid=0 auid=1001 ses=1368 major=136 minor=0 comm="bash" data=7669202F6574632F727375737F7F79730963090D

Log output after using rlog.sh:

type=TTY msg=audit(03/15/2022 14:40:34.791:2962): tty pid=2962 uid=root auid=root ses=1368 major=136 minor=0 comm="bash" data=7669202F6574632F727375737F7F79730963090D

Now I have audit.log being generated in a different format, like below.

My audit.log (custom audit.log format):

IP: 10.200.30.40 | <158>Mar 11 16:10:24 xxx-yyy-zzz AuditLog type=SYSCALL msg=audit(1646979024.027:1697): arch=c000003e syscall=4 success=yes exit=0 a0=7f1304042410 a1=7f13092f66a0 a2=7f13092f66a0 a3=0 items=1 ppid=1 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="in:imfile" exe="/usr/sbin/rsyslogd" subj=system_u:system_r:syslogd_t:s0 key=(null) Hostname=10.200.30.40

(default audit.log format):

type=USER_TTY msg=audit(1646592289.268:441): pid=2962 uid=0 auid=1001 ses=1368 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 data=73797374656D63746C2073746174757320727379736C6F67 Hostname=xxx

So basically I will have logs of both the default audit log format and this custom format in audit.log. When I apply the rlog.sh/ausearch utility to this log, only lines in the default audit.log format are converted by ausearch and sent to output and indexed; the other lines are not even sent to output. Please help.
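ausearch -i only understands raw audit records, so the syslog-prefixed lines are most likely being dropped by its parser. A hedged tweak to the final pipeline, assuming the prefix always ends with the literal token "AuditLog " (as in the sample above): strip it before ausearch sees the line, leaving default-format lines untouched because the pattern simply doesn't match them.

awk -v START=$SEEK -v OUTPUT=$SEEK_FILE 'NR>START { print } END { print NR > OUTPUT }' $AUDIT_FILE \
    | tee $TEE_DEST \
    | sed 's/^.* AuditLog //' \
    | /sbin/ausearch -i 2>/dev/null \
    | grep -v "^----"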
Hi,

The search on one of my dashboard panels is outputting log events... BUT it is still processing the search, even when I change the time range to just 1 minute or even 30 seconds. Is there any way I can get the search to complete, as I need the option to export the panel's events to become available?

Thanks,
Patrick
Hi Splunkers,

Can we use Splunk ITSI for health checks of all network devices, endpoint devices, etc., or is there an app that is recommended for health-monitoring use case requirements? TIA
Hello,

I use a transpose command in order to have the _time field displayed in columns instead of rows.

First question: how do I delete the header?

Second question: I was doing color formatting like this:

<format type="color" field="Qualité">
  <colorPalette type="list">[#53A051,#F1813F,#DC4E41]</colorPalette>
  <scale type="threshold">2,10</scale>
</format>

Since using transpose, the formatting doesn't work. What do I have to do, please?
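A hedged sketch of the usual workarounds, assuming "the header" means the column of original field names that transpose adds (the column_name value is illustrative):

... | transpose header_field=_time column_name=metric
| fields - metric

header_field=_time makes the _time values the new column headers, and dropping the named first column removes the leftover field-name column. For the second question: after transpose, "Qualité" is no longer a column, so a <format> block pinned to field="Qualité" matches nothing; omitting the field attribute makes the palette apply to every column, though it's worth checking that the 2,10 thresholds still make sense for the transposed values.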
Hi,

I have set up an HEC input on a heavy forwarder and have a base app whose outputs forward all data to the Splunk Cloud indexers, but I am not seeing the HEC data in Cloud. Am I missing a particular setting that forwards HEC data?

Thanks,
Hi,

I have a search query as below:

index= ****
| stats avg(upstream_response_time), p95(upstream_response_time), p99(upstream_response_time) by service

It gives me results as below. I want to round off the decimal values to 2 digits for all column values. I tried something like this but it didn't give me any results. Can you please help me trim the results to 2 digits?

index= ****
| eval upstream_response_times = round(upstream_response_time,2)
| stats avg(upstream_response_times), p95(upstream_response_times), p99(upstream_response_times) by service
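A hedged sketch: aggregate first, then round the aggregates, using foreach so every column is handled in one pass (the AS names are illustrative):

index= ****
| stats avg(upstream_response_time) as avg_rt, p95(upstream_response_time) as p95_rt, p99(upstream_response_time) as p99_rt by service
| foreach *_rt
    [ eval <<FIELD>> = round('<<FIELD>>', 2) ]

If the pre-stats round returned nothing, it may be because upstream_response_time isn't purely numeric in the raw events (nginx, for example, can emit comma-separated lists when several upstreams are tried), in which case round() yields null before stats ever runs.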
Hello, I'm working for a client that has some older infra still on Red Hat 6 (to be replaced, but that is not on my side and could happen far in the future), kernel around 2.6.32. We need to connect these machines to Splunk Observability Cloud! I managed to overcome the installation issues (some cheats here and there) and got the RPM installed, as you can see:

FATAL: kernel too old
...
/usr/lib/splunk-otel-collector/agent-bundle/bin/patch-interpreter: line 15: 5818 Aborted (core dumped) ${tmproot%/}/bin/patchelf --set-

[root@RHEL6 ~]# rpm -q splunk-otel-collector
splunk-otel-collector-0.46.0-1.x86_64

But the default conf was not created, so I copied the same one from an RHEL7 machine:

[root@localhost ~]# /usr/bin/otelcol --config /etc/otel/collector/splunk-otel-collector.conf
2022/03/18 06:07:28 main.go:263: Set config to /etc/otel/collector/splunk-otel-collector.conf
2022/03/18 06:07:28 main.go:346: Set ballast to 168 MiB
2022/03/18 06:07:28 main.go:360: Set memory limit to 460 MiB
Error: failed to get config: cannot retrieve the configuration: unable to parse yaml: yaml: unmarshal errors:
  line 1: cannot unmarshal !!str `SPLUNK_...` into map[string]interface {}
2022/03/18 06:07:28 main.go:130: application run finished with error: failed to get config: cannot retrieve the configuration: unable to parse yaml: yaml: unmarshal errors:
  line 1: cannot unmarshal !!str `SPLUNK_...` into map[string]interface {}
[root@localhost ~]#
[root@localhost ~]# go version
go version go1.18 linux/amd64

So all the files are in place; they look the same as the ones on the working system, and the file is definitely being read, since SPLUNK_ is the first value in the .conf (just proof that all the values, masked here for security of course, are in place):

[root@localhost ~]# head /etc/otel/collector/splunk-otel-collector.conf
SPLUNK_CONFIG=/etc/otel/collector/agent_config.yaml
SPLUNK_ACCESS_TOKEN=*******
SPLUNK_REALM=us0
<and so on>

Any ideas how to get those values mapped properly? I don't know whether other problems will show up in the next steps, but I have to try. I know and acknowledge that RHEL6 is not supported, but the binary looks like it launches and I have high hopes. TIA!
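For what it's worth, the YAML error suggests splunk-otel-collector.conf is an environment file (normally consumed by the systemd unit), not collector YAML, so --config should point at the agent_config.yaml it references, with the SPLUNK_* variables exported first. A hedged sketch, assuming the default paths shown above:

# export every variable from the env file, then start the collector
set -a
. /etc/otel/collector/splunk-otel-collector.conf
set +a
/usr/bin/otelcol --config "$SPLUNK_CONFIG"

Whether the RHEL6-era kernel then lets the collector run end to end is a separate question, but at least the configuration step should stop failing on the env-file syntax.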
Hello. I'm wondering if there is a reasonable built-in way to get details of the certificates used across a Splunk environment.

I have several indexers, some search heads, and many forwarders, and all the traffic is encrypted and authenticated with certificates. With several dozen (or even hundreds) of certificates it's obviously hard to track by hand which certs expire when, and I sometimes get into the annoying situation where a UF stops forwarding because it's no longer authenticated by the HFs, its cert having just expired.

I could of course prepare a completely external script which would run, let's say, once a day, do a quick scan over the config files, find the certificate-related directives, extract the relevant data from the certificates, and report it into some file that I'd later ingest into Splunk. Another approach would be to skip the intermediate log file and use a scripted input, but it boils down to the same thing. It's possible, but it's inconvenient since I'd have to prepare two separate versions of such a tool (one for Linux, one for Windows) and maintain the solution separately. And I'm not a big PowerShell pro, so preparing the script for Windows would be a bit of a challenge for me.

Hence the question: can Splunk report such details on its own? I haven't found anything useful so far, but maybe I missed something.
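I'm not aware of a built-in cross-host certificate report, but as a sketch of the scripted-input idea described above (Linux half only; the list of cert-bearing settings is an assumption and real configs may use other directives or relative paths):

#!/bin/sh
# emit one line per certificate referenced in local .conf files, with its expiry date
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
grep -rhE '^(serverCert|clientCert|caCertFile|sslRootCAPath)[[:space:]]*=' "$SPLUNK_HOME/etc" 2>/dev/null \
  | cut -d= -f2 | tr -d ' ' | sort -u \
  | while read -r path; do
      # expand $SPLUNK_HOME if the setting used the variable form
      file=$(echo "$path" | sed "s|\$SPLUNK_HOME|$SPLUNK_HOME|")
      end=$(openssl x509 -noout -enddate -in "$file" 2>/dev/null | cut -d= -f2)
      [ -n "$end" ] && echo "cert=\"$file\" not_after=\"$end\""
    done

Run as a daily scripted input on each host, that gives key=value events that are trivial to alert on in Splunk.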
Team, can you please help me with the Splunk query for the below? Thank you.

My Splunk query returns the below:

1
1
1
2
2
2
3
1
1
3
1

How do I get the below (unique within each group)?

1
2
3
1
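If "group" means a run of consecutive identical values, a hedged sketch (the field name val is illustrative): dedup with consecutive=true keeps only the first result of each run instead of deduplicating globally.

... | dedup consecutive=true val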