All Topics

Hi all, please help with the below. I am using rlog.sh (the built-in script provided by Splunk in the TA-unix package) to apply the ausearch utility to Linux audit logs.

SEEK_FILE=$SPLUNK_HOME/var/run/splunk/unix_audit_seekfile_model_prod
AUDIT_FILE=/opt/splunklogs_app/audit_prod/audit.log
if [ -e $SEEK_FILE ] ; then
    SEEK=`head -1 $SEEK_FILE`
else
    SEEK=0
    echo "0" > $SEEK_FILE
fi
FILE_LINES=`wc -l $AUDIT_FILE | cut -d " " -f 1`
if [ $FILE_LINES -lt $SEEK ] ; then
    # audit file has wrapped
    SEEK=0
fi
awk -v START=$SEEK -v OUTPUT=$SEEK_FILE 'NR>START { print } END { print NR > OUTPUT }' $AUDIT_FILE | tee $TEE_DEST | /sbin/ausearch -i 2>/dev/null | grep -v "^----"

This built-in script converts the default format of the Linux audit logs by running them through ausearch. Example:

Log input:
type=TTY msg=audit(1647315634.249:442): tty pid=2962 uid=0 auid=1001 ses=1368 major=136 minor=0 comm="bash" data=7669202F6574632F727375737F7F79730963090D

Log output after using rlog.sh:
type=TTY msg=audit(03/15/2022 14:40:34.791:2962): tty pid=2962 uid=root auid=root ses=1368 major=136 minor=0 comm="bash" data=7669202F6574632F727375737F7F79730963090D

Now I have audit.log being generated in a different format as well, like below.

My audit.log (custom audit.log format):
IP: 10.200.30.40 | <158>Mar 11 16:10:24 xxx-yyy-zzz AuditLog type=SYSCALL msg=audit(1646979024.027:1697): arch=c000003e syscall=4 success=yes exit=0 a0=7f1304042410 a1=7f13092f66a0 a2=7f13092f66a0 a3=0 items=1 ppid=1 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="in:imfile" exe="/usr/sbin/rsyslogd" subj=system_u:system_r:syslogd_t:s0 key=(null) Hostname=10.200.30.40

(default audit.log format):
type=USER_TTY msg=audit(1646592289.268:441): pid=2962 uid=0 auid=1001 ses=1368 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 data=73797374656D63746C2073746174757320727379736C6F67 Hostname=xxx

So basically I will have logs in both the default audit.log format and this custom format in the same audit.log. When I apply rlog.sh/ausearch to this file, only the lines in the default audit.log format are converted by ausearch and sent to output and indexed; the other lines are not sent to output at all. Please help.
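A possible workaround, offered only as a sketch and not part of the original post: ausearch -i is likely dropping the custom lines because it expects raw audit records that begin with type= (or node=), not lines carrying a syslog-style prefix. Assuming the custom lines always contain the literal marker "AuditLog " immediately before the type= field, and that the trailing " Hostname=..." suffix can be discarded, the last line of rlog.sh could be adapted as below; test against real data before relying on it.

# Sketch only: strip the "IP: ... AuditLog " prefix and the trailing Hostname=...
# so that both log variants reach ausearch in the native audit format.
# Default-format lines are passed through unchanged by the first sed expression.
awk -v START=$SEEK -v OUTPUT=$SEEK_FILE 'NR>START { print } END { print NR > OUTPUT }' $AUDIT_FILE \
    | sed -e 's/^.*AuditLog //' -e 's/ Hostname=[^ ]*$//' \
    | tee $TEE_DEST \
    | /sbin/ausearch -i 2>/dev/null \
    | grep -v "^----"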
Hi, the search on one of my dashboard panels is outputting log events, but it is still processing the search even when I change the time range to just 1 minute or even 30 seconds. Is there any way I can get the search to complete? I need the option to export the panel's events to be available. Thanks, Patrick
Hi Splunkers, can we use Splunk ITSI for health checks of all network devices, endpoint devices, etc., or is there a recommended app for this health-monitoring use case? TIA
Hello, I use the transpose command in order to have the _time field displayed in columns instead of rows.

First question: how do I delete the header?

Second question: I was doing color formatting like this:

<format type="color" field="Qualité">
  <colorPalette type="list">[#53A051,#F1813F,#DC4E41]</colorPalette>
  <scale type="threshold">2,10</scale>
</format>

Since using transpose, the formatting no longer works. What do I have to do, please?
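A hedged pointer, not taken from the original post: transpose has options that control what ends up in the header row and in the first column, which may remove the need to delete anything afterwards. The sketch below assumes the _time values should become the column headers and that "Qualité" is the field being transposed; adjust the names to the actual search.

... your base search ...
| eval time=strftime(_time, "%H:%M")
| transpose 0 header_field=time column_name="Qualité"

After transpose the data no longer sits in a column literally named "Qualité", so the <format ... field="Qualité"> stanza no longer matches anything; the color formatting has to reference the column names that the transposed table actually produces.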
Hi, I have set up a HEC input on a heavy forwarder and have a base app that forwards all data outputs to the Splunk Cloud indexers, but I am not seeing the HEC data in Cloud. Am I missing a particular setting that forwards HEC data? Thanks,
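For what it's worth, a sketch of the kind of configuration involved; the stanza names, token, index and server values below are placeholders, not taken from the post. HEC data received on a heavy forwarder follows the same outputs.conf routing as any other input, and the HEC stanza can optionally be pinned to a named output group.

# inputs.conf on the heavy forwarder (illustrative values)
[http://my_hec_input]
token = <your-hec-token>
index = main
# optional: route this HEC input to a specific output group defined in outputs.conf
outputgroup = splunkcloud_indexers

# outputs.conf (normally delivered by the Splunk Cloud forwarder credentials app)
[tcpout:splunkcloud_indexers]
server = inputs.<your-stack>.splunkcloud.com:9997

If the outputs come from the Splunk Cloud credentials app, it is worth checking that the HEC app does not override them and that the token's target index exists on the Cloud side.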
Hi, I have a search query as below:

index=****
| stats avg(upstream_response_time), p95(upstream_response_time), p99(upstream_response_time) by service

It gives me results as below, and I want to round off the decimal values to 2 digits for all column values. I tried something like this, but it didn't give me any results. Can you please help me trim the results to 2 digits?

index=****
| eval upstream_response_times = round(upstream_response_time,2)
| stats avg(upstream_response_times), p95(upstream_response_times), p99(upstream_response_times) by service
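A hedged suggestion (the field aliases below are my own, not from the post): rounding the raw field before stats does not change what avg/p95/p99 produce, so it is usually cleaner to name the aggregates and round them after the stats command.

index=****
| stats avg(upstream_response_time) as avg_rt, p95(upstream_response_time) as p95_rt, p99(upstream_response_time) as p99_rt by service
| eval avg_rt=round(avg_rt,2), p95_rt=round(p95_rt,2), p99_rt=round(p99_rt,2)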
Hello, I'm working for a client that still has some older infrastructure on Red Hat 6 (to be replaced, but that is not on my side and could happen far into the future), kernel around 2.6.32. We need to connect these machines to Splunk Observability Cloud. I managed to overcome the installation issues (some cheats here and there) and got the RPM installed, as you can see:

FATAL: kernel too old
...
/usr/lib/splunk-otel-collector/agent-bundle/bin/patch-interpreter: line 15: 5818 Aborted (core dumped) ${tmproot%/}/bin/patchelf --set-
[root@RHEL6 ~]# rpm -q splunk-otel-collector
splunk-otel-collector-0.46.0-1.x86_64

But the default configuration was not created, so I copied the same files from a RHEL7 machine:

[root@localhost ~]# /usr/bin/otelcol --config /etc/otel/collector/splunk-otel-collector.conf
2022/03/18 06:07:28 main.go:263: Set config to /etc/otel/collector/splunk-otel-collector.conf
2022/03/18 06:07:28 main.go:346: Set ballast to 168 MiB
2022/03/18 06:07:28 main.go:360: Set memory limit to 460 MiB
Error: failed to get config: cannot retrieve the configuration: unable to parse yaml: yaml: unmarshal errors:
  line 1: cannot unmarshal !!str `SPLUNK_...` into map[string]interface {}
2022/03/18 06:07:28 main.go:130: application run finished with error: failed to get config: cannot retrieve the configuration: unable to parse yaml: yaml: unmarshal errors:
  line 1: cannot unmarshal !!str `SPLUNK_...` into map[string]interface {}
[root@localhost ~]#
[root@localhost ~]# go version
go version go1.18 linux/amd64

So all the files are in place and they look the same as on the working system. The file is definitely being read, since SPLUNK_ is the first value in the .conf (just proof that all the values, masked here for security of course, are in place):

[root@localhost ~]# head /etc/otel/collector/splunk-otel-collector.conf
SPLUNK_CONFIG=/etc/otel/collector/agent_config.yaml
SPLUNK_ACCESS_TOKEN=*******
SPLUNK_REALM=us0
<and so on>

Any ideas how to get those values mapped properly? I don't know whether other problems will show up in the next steps, but I have to try. I know and acknowledge that RHEL6 is not supported, but the binary looks like it launches and I have high hopes. TIA!
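One observation, offered as an assumption rather than a statement from the post: splunk-otel-collector.conf is an environment file (KEY=VALUE lines, normally consumed by the systemd unit), not YAML, so passing it to --config will always fail to unmarshal. A sketch of launching the collector manually under that assumption, using the paths shown above:

# Export the KEY=VALUE pairs from the environment file, then point --config
# at the actual YAML referenced by SPLUNK_CONFIG.
set -a
. /etc/otel/collector/splunk-otel-collector.conf
set +a
/usr/bin/otelcol --config "$SPLUNK_CONFIG"   # i.e. /etc/otel/collector/agent_config.yaml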
Hello. I'm wondering if there is a reasonable built-in way to get details of the certificates used across a Splunk environment. I have several indexers, some search heads and many forwarders, and all the traffic is encrypted and authenticated with certificates. With several dozen (or even hundreds) of certificates it's obviously hard to track by hand which certs expire when, and I sometimes get into an annoying situation where a UF stops forwarding because it's no longer authenticated by the HFs, because its cert has just expired.

I could of course prepare a completely external script which would run, let's say, once a day, do a quick scan over the config files, find the certificate-related directives, extract the relevant data from the certificates and report it into some file that I'd later ingest into Splunk. Another approach would be to skip the intermediate log file and use a scripted input, but it boils down to the same thing. It's possible, but it's kind of inconvenient since I'd have to prepare two separate versions of such a tool (one for Linux, one for Windows) and maintain that solution separately. And I'm not a big PowerShell pro, so it would be a bit of a challenge for me to prepare the script for Windows.

Hence the question: can Splunk report such details on its own? I didn't find anything useful so far, but maybe I missed something.
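For illustration only, a rough sketch of the "external script" idea on the Linux side; the grep pattern, setting names and output format are my assumptions, and real deployments reference certs through more settings than these.

#!/bin/sh
# Sketch: find certificate paths referenced in Splunk .conf files and report their expiry.
# Paths containing variables such as $SPLUNK_HOME would still need expanding before openssl can read them.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
grep -rhoE '(serverCert|clientCert|sslRootCAPath|caCertFile)[[:space:]]*=[[:space:]]*[^[:space:]]+' "$SPLUNK_HOME/etc" 2>/dev/null \
  | sed 's/.*=[[:space:]]*//' | sort -u \
  | while read -r cert; do
      expiry=$(openssl x509 -noout -enddate -in "$cert" 2>/dev/null | cut -d= -f2)
      [ -n "$expiry" ] && echo "host=$(hostname) cert=\"$cert\" not_after=\"$expiry\""
    done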
Team, can you please help me with the Splunk query for the below? Thank you.

My Splunk query returns:
1 1 1 2 2 2 3 1 1 3 1

How do I get the below (unique within each group)?
1 2 3 1
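A possible approach (the field name below is a placeholder): dedup supports a consecutive=true option that only removes events whose value repeats the immediately preceding one, which matches the expected output.

... your base search ...
| dedup consecutive=true your_field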
Hi All, registrations for this year's .conf22 are open, and I see a registration fee while signing up with a personal account. Is .conf22 registration free for Splunk partner companies?
Hello, I use the appendcols command in order to aggregate the results of different searches into one table. I have two issues with the three fields highlighted in yellow.

Issue 1: if I don't use the piece of code below, the field "Tea" is not displayed at all (same thing for INC and OUT):

| appendpipe [ stats count as _events | where _events = 0 | eval "Tea"= 0]

Issue 2: the appendpipe command puts a "0" only on the first line, but not on the others.

Here is the search:

| appendcols
    [ search index=titi earliest=@d+7h latest=@d+19h
    | bin span=1h _time
    | eval time = strftime(_time, "%H:%M")
    | stats dc(Tea) as Tea by time
    | rename time as Heure
    | appendpipe [ stats count as _events | where _events = 0 | eval Tea= 0] ]
| appendcols
    [ search index=tutu earliest=@d+7h latest=@d+19h
    | bin span=1h _time
    | eval time = strftime(_time, "%H:%M")
    | stats dc(s) as "OUT" by time
    | rename time as Heure
    | appendpipe [ stats count as _events | where _events = 0 | eval "OUT"= 0]]

What is wrong, please? And I have something else strange: as you can see, when the result is 0 it is usually displayed, but why do I sometimes have an empty field instead of 0, like in the cells highlighted in yellow? Can anybody give me a solution for displaying the result in every case where the value is 0?
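A hedged suggestion (the field names Tea and OUT come from the search above; INC is assumed from the description): instead of the appendpipe workaround inside each subsearch, filling nulls once after all the appendcols have run turns every missing cell into 0, regardless of which row it is on.

... base search ...
| appendcols [ search index=titi ... | stats dc(Tea) as Tea by time | rename time as Heure ]
| appendcols [ search index=tutu ... | stats dc(s) as "OUT" by time | rename time as Heure ]
| fillnull value=0 Tea OUT INC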
The table below is for one user; likewise I have to pull the details for many users who visited multiple URLs at different timestamps. I am trying to calculate the total duration between each url/E and url/J. So what I am trying to achieve is: whenever the user visits url/E and traverses through to url/J, calculate the total duration. I tried using the transaction command, but it only calculates the duration between the last url/E event and url/J.

USER_ID    TIMESTAMP    URL
CD_125     05:30:36     URL/E
CD_125     05:30:38     URL/F
CD_125     05:30:39     URL/H
CD_125     05:30:41     URL/J
CD_125     05:30:43     URL/E
CD_125     05:30:44     URL/I
CD_125     05:30:45     URL/J

What I am looking for here is the duration for each URL/E to URL/J span. The output I am expecting is this:

USER_ID    Duration    URL
CD_125     5           url/E url/F url/H url/J
CD_125     2           url/E url/I url/J

I would appreciate it if someone could guide and help me with the query. Thanks.
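A hedged sketch, assuming the field names USER_ID and URL exist exactly as shown: transaction with startswith/endswith opens a new transaction at every url/E and closes it at the next url/J, so each E-to-J span gets its own duration rather than only the last one.

... your base search ...
| transaction USER_ID startswith=eval(URL=="URL/E") endswith=eval(URL=="URL/J")
| table USER_ID duration URL

transaction emits its duration field in seconds, which matches the expected 5 and 2 above; if memory becomes a concern with many users, a streamstats-based approach is a possible alternative.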
I need some help checking my "send email" configuration, as I still have not received the email alert in my mailbox. The alert is already triggered, as I can see it in the "Triggered Alerts" section. When I configure the mail server settings like this and save them, then open the page again, the username and password are gone.
When trying to enable aws_description_tasks, I'm finding in the logs that it is erroring out with 'Connection reset by peer', indicating that this is due to a firewall error in my network. I can ask the networking team to allow traffic from this endpoint, but I'm unsure what the URL is. Is there any way to find the endpoint or URL that aws_description_tasks uses to grab metadata from AWS? I'm not sure if it's simply 169.254.169.254, but I would like to know how I can go about finding the endpoints used by the different AWS inputs.
Hi guys, I was tasked with building a configuration register and defining the processes for Splunk for my organization. Could someone help me with an example? Thank you.
Hello folks, I have the below query on one of my dashboard panels. I pass the IN_BUSINESSDATE field value from the dashboard (form input) with a default of % and a prefix and suffix value of %, so in case the user does not provide a value, the query gets IN_BUSINESSDATE as %%% (that's OK).

index=dockerlogs
| search app_name = ABCD AND logEvent="Delivered"
| spath input=businessKey path=businessDate output=businessDate
| spath input=businessKey output=sourceSystem path=sourceSystem
| eval businessDate=substr(businessDate,1,10)
| where like(businessDate, "$IN_BUSINESSDATE$")
| stats count by businessDate, sourceSystem

Now I would like to change the stats at the end of the query as below if IN_BUSINESSDATE is not provided (meaning the value is %%%):

| stats count by sourceSystem

How can I achieve this? Thank you!
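One possible approach, sketched under the assumption that the token really does resolve to the literal string %%% when left at its default (the group_date field name is my own): keep a single stats clause, but collapse the date into a constant when the default token value is detected.

index=dockerlogs
| search app_name = ABCD AND logEvent="Delivered"
| spath input=businessKey path=businessDate output=businessDate
| spath input=businessKey output=sourceSystem path=sourceSystem
| eval businessDate=substr(businessDate,1,10)
| where like(businessDate, "$IN_BUSINESSDATE$")
| eval group_date=if("$IN_BUSINESSDATE$"=="%%%", "ALL", businessDate)
| stats count by group_date, sourceSystem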
I am looking for a way to check for multiple conditions to match and, if they are met, output a specific word, such as "true". Example:

my_cool_search_here
| eval condition_met=if(user=* AND DoW IN (Mon,Wed) AND HoD IN (01,02,03) AND hostname IN ("hostname.hostdomain","hostname.hostdomain"), "true")

I don't know if that makes sense, but essentially I want to check whether "user" has ANY value, and then whether the fields "DoW", "HoD", and "hostname" have specific values out of a possible range; if all of that matches, then set the value of "condition_met" to "true". I know I can do this for a single field/value, but how would I accomplish this for multiple different conditions? Thanks!
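A hedged sketch of one way to write this (field names come from the example above; the quoting, the in() function and the "false" branch are my additions, since eval's if() needs both a true and a false result): isnotnull() covers the "user has any value" part, and in() is the eval-function equivalent of matching a field against a list of values.

my_cool_search_here
| eval condition_met=if(isnotnull(user) AND in(DoW,"Mon","Wed") AND in(HoD,"01","02","03") AND in(hostname,"hostname.hostdomain","hostname.hostdomain"), "true", "false")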
Hi All, I'm having an issue where report acceleration is not working for non-admin roles. The report accelerates correctly when run as the admin user, and 'Using summaries for search' appears in the job inspector. When the same report is run by other users, the report will not load over certain time periods and does not show the same 'Using summaries for search' confirmation in the job inspector.

Things I have tried for the other role in question:
- Confirmed the scheduled_search and accelerated_search capabilities are enabled
- Confirmed the user has write access to the report
- Confirmed the report is in a shared app which the user has access to
- Tried various other capabilities and inheritance from the power user role

There are over 26 million events being matched. Is there a chance this role is hitting a limit which prevents the accelerated search functionality? Let me know if you need any more information.
Hello all, thanks in advance for your assistance. I have a 6-node indexer cluster with a search factor of 6 and a replication factor of 3. My ingest is 130 GB/day of proxy data.

My hardware: 7 TB hot/warm SSD and 112 TB cold 10k spindle per indexer.

I have a requirement for 30 days hot/warm and 1,065 days cold = 1,095 days (3 years) total.

Calculations I have found say:
(Daily Avg. Ingest x Compression Rate x Num Days Retention) / # of Indexers

So:
Hot/Warm: (130 GB * .6 compression * 15 days) / 6 = 390 GB per indexer
Cold: (130 GB * .6 compression * 1,065 days) / 6 = 13,845 GB per indexer
Total: (130 GB * .6 compression * 1,095 days) / 6 = 14,235 GB per indexer

Given that, I think my indexes.conf would have:

[idx_proxy]
homePath = volume:primary/idx_proxy/db
coldPath = volume:secondary/idx_proxy/colddb
thawedPath = $SPLUNK_DB/idx_proxy/thaweddb
maxTotalDataSizeMB = 14576640
maxDataSize = auto_high_volume
homePath.maxDataSizeMB = 399360
frozenTimePeriodInSecs = 94608000

The question: with the replication factor taken into account, I'm thinking this answer is not complete. Can anyone help with an updated formula/calculation?

This is the current storage:

[splunk@splunkidx1 ~]$ du -sh /splunkData/*/idx_proxy/
8.0T /splunkData/cold/idx_proxy/
947G /splunkData/hot/idx_proxy/
[splunk@splunkidx2 ~]$ du -sh /splunkData/*/idx_proxy/
8.0T /splunkData/cold/idx_proxy/
826G /splunkData/hot/idx_proxy/
[splunk@splunkidx3 ~]$
7.9T /splunkData/cold/idx_proxy/
955G /splunkData/hot/idx_proxy/
[splunk@splunkidx4 ~]$ du -sh /splunkData/*/idx_proxy/
7.8T /splunkData/cold/idx_proxy/
952G /splunkData/hot/idx_proxy/
[splunk@splunkidx5 ~]$ du -sh /splunkData/*/idx_proxy/
8.0T /splunkData/cold/idx_proxy/
936G /splunkData/hot/idx_proxy/
[splunk@splunkidx6 ~]$ du -sh /splunkData/*/idx_proxy/
7.8T /splunkData/cold/idx_proxy/
911G /splunkData/hot/idx_proxy/
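For what it's worth, a hedged sketch of how the cluster factors are usually folded in. The 15%/35% figures are the commonly quoted Splunk sizing rules of thumb, not numbers from this post, and I'm assuming the number of searchable copies equals the search factor (which cannot exceed the replication factor, so the stated SF=6/RF=3 may be reversed). Each replicated copy stores the compressed rawdata (roughly 15% of raw), and each searchable copy additionally stores the tsidx/index files (roughly 35% of raw), giving:

per_indexer_GB ≈ daily_ingest_GB x retention_days x (RF x 0.15 + SF x 0.35) / number_of_indexers

Taking the numbers above with RF = 3 and SF = 3 purely as an illustration:

130 x 1,095 x (3 x 0.15 + 3 x 0.35) / 6 = 130 x 1,095 x 1.5 / 6 ≈ 35,588 GB per indexer for the full 3 years

which is noticeably larger than the 14,235 GB from the single-copy formula, so maxTotalDataSizeMB would need to be sized per indexer against that replicated footprint.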
How can I include several unique IP addresses in the search command with src=, or can I use src IN (ip, ip, ip)?
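For illustration (index name and addresses below are placeholders): the IN operator works directly in the search command, so a list of addresses can be written as:

index=your_index src IN ("10.1.1.1", "10.1.1.2", "10.1.1.3")

The equivalent explicit form is (src="10.1.1.1" OR src="10.1.1.2" OR src="10.1.1.3").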