All Posts



When exporting a PDF from the Splunk dashboard, I'm experiencing an issue where the graph appears to be truncated. Specifically, the PDF omits today's data from the graph, despite it being displayed correctly on the Splunk portal.
I have the below configurations in my transforms and props config files to change the source name of my events from udp:9514 to auditd, but it doesn't seem to be working.

transforms.conf
[change_source_to_auditd]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = MetaData:Source
FORMAT = source::auditd

props.conf
[source::udp:9514]
TRANSFORMS-change_source = change_source_to_auditd

Below are the sample logs:
Jan 23 19:06:00 172.28.100.238 Jan 23 08:05:18 LIDFP3NTF001.li.local audispd: type=EOE msg=audit(1737619518.941:2165876):
Jan 23 19:06:00 172.28.100.238 Jan 23 08:05:18 LIDFP3NTF001.li.local audispd: type=PROCTITLE msg=audit(1737619518.941:2165876): proctitle=2F7573722F7362696E2F727379736C6F6764002D6E
Jan 23 19:06:00 172.28.100.238 Jan 23 08:05:18 LIDFP3NTF001.li.local audispd: type=SOCKADDR msg=audit(1737619518.941:2165876): saddr=020019727F0000010000000000000000 SADDR={ saddr_fam=inet laddr=127.0.0.1 lport=6514 }
Jan 23 19:06:00 172.28.100.238 Jan 23 08:05:18 LIDFP3NTF001.li.local audispd: type=SYSCALL msg=audit(1737619518.941:2165876): arch=c000003e syscall=42 success=yes exit=0 a0=f a1=7fedf8006c20 a2=10 a3=0 items=0 ppid=1 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=72733A6D61696E20513A526567 exe="/usr/sbin/rsyslogd" key="network_connect_4" ARCH=x86_64 SYSCALL=connect AUID="unset" UID="root" GID="root" EUID="root" SUID="root" FSUID="root" EGID="root" SGID="root" FSGID="root"
Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=CRED_DISP msg=audit(1737619560.680:2114873): pid=3709107 uid=985 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=?
res=success' UID="telegraf" AUID="unset" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=CRED_REFR msg=audit(1737619560.577:2114872): pid=3709107 uid=985 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' UID="telegraf" AUID="unset" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=USER_ACCT msg=audit(1737619560.577:2114871): pid=3709107 uid=985 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="telegraf" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' UID="telegraf" AUID="unset" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=EOE msg=audit(1737619560.577:2114870): Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PROCTITLE msg=audit(1737619560.577:2114870): proctitle=7375646F002F7573722F7362696E2F706D63002D75002D620031004745542054494D455F5354415455535F4E50 Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PATH msg=audit(1737619560.577:2114870): item=0 name="/etc/shadow" inode=132150 dev=fd:00 mode=0100000 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 OUID="root" OGID="root" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=SYSCALL msg=audit(1737619560.577:2114870): arch=c000003e syscall=257 success=yes exit=9 a0=ffffff9c a1=7fc1d61bbe1a a2=80000 a3=0 items=1 ppid=3709106 pid=3709107 auid=4294967295 uid=985 gid=985 euid=0 suid=0 fsuid=0 egid=985 sgid=985 fsgid=985 tty=(none) ses=4294967295 comm="sudo" exe="/usr/bin/sudo" key="etcpasswd" ARCH=x86_64 SYSCALL=openat AUID="unset" UID="telegraf" GID="telegraf" EUID="root" SUID="root" FSUID="root" EGID="telegraf" SGID="telegraf" FSGID="telegraf" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 
LIDFP3NTF002.li.local audispd: type=EOE msg=audit(1737619560.570:2114869): Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PROCTITLE msg=audit(1737619560.570:2114869): proctitle=7375646F002F7573722F7362696E2F706D63002D75002D620031004745542054494D455F5354415455535F4E50 Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PATH msg=audit(1737619560.570:2114869): item=1 name="/lib64/ld-linux-x86-64.so.2" inode=397184 dev=fd:00 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 OUID="root" OGID="root" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PATH msg=audit(1737619560.570:2114869): item=0 name="/usr/bin/sudo" inode=436693 dev=fd:00 mode=0104111 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 OUID="root" OGID="root" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=EXECVE msg=audit(1737619560.570:2114869): argc=6 a0="sudo" a1="/usr/sbin/pmc" a2="-u" a3="-b" a4="1" a5=4745542054494D455F5354415455535F4E50 Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=BPRM_FCAPS msg=audit(1737619560.570:2114869): fver=0 fp=0 fi=0 fe=0 old_pp=00000000000000c2 old_pi=00000000000000c2 old_pe=00000000000000c2 old_pa=00000000000000c2 pp=00000000200000c2 pi=00000000000000c2 pe=00000000200000c2 pa=0 frootid=0 Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=SYSCALL msg=audit(1737619560.570:2114869): arch=c000003e syscall=59 success=yes exit=0 a0=7fe718b344a0 a1=7fe7186addb0 a2=7ffcc797d010 a3=3 items=2 ppid=3709106 pid=3709107 auid=4294967295 uid=985 gid=985 euid=0 suid=0 fsuid=0 egid=985 sgid=985 fsgid=985 tty=(none) ses=4294967295 comm="sudo" exe="/usr/bin/sudo" key="priv_esc" ARCH=x86_64 SYSCALL=execve AUID="unset" UID="telegraf" GID="telegraf" EUID="root" SUID="root" FSUID="root" 
EGID="telegraf" SGID="telegraf" FSGID="telegraf"
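A common cause for this kind of override not taking effect is that index-time transforms only run on the first full Splunk instance that parses the data (an indexer or heavy forwarder), so the same stanzas deployed anywhere else are simply ignored. A hedged sketch of the usual working form, assuming the UDP input lands directly on the parsing tier:

```
# transforms.conf -- on the indexer or heavy forwarder that parses the UDP feed
[change_source_to_auditd]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = MetaData:Source
FORMAT = source::auditd

# props.conf -- same instance; the stanza must match the incoming source exactly
[source::udp:9514]
TRANSFORMS-change_source = change_source_to_auditd
```

Note that splunkd must be restarted after deploying these files, and the new source name applies only to events indexed after the restart; already-indexed events keep udp:9514.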
I've tried to test this, but it did not work for me. The whole search was blocked and did not return any data. There was no need to dig in further here, as I had to rework the whole dashboard anyway to solve performance issues, and that rework also resolved the issue discussed here.
I know this is an old post, but I also had this issue because app creation in Splunk on-prem 9.2.1 puts the icon in the wrong place. I opened the browser inspector, turned off the cache, and watched the PNG requests. From there I saw it was trying to fetch appname/static/appIconAlt_2x.png. I created the PNG at that location and I can see the preview now. Happy Splunking!
Hi, I need to create a user account that has no access at all to the dashboards. The only purpose of the account is to run scheduled searches through a REST API. Does anyone know if it's possible to create such an account?
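One hedged approach, assuming roles are managed via authorize.conf (the role name below is hypothetical): create a role that carries only the search capability and a narrow index scope, then assign the API account to that role alone, so it can call the search REST endpoints (e.g. services/search/jobs) but has no role-based access to app content.

```
# authorize.conf -- minimal sketch of a REST-search-only role
[role_api_search_only]
search = enabled
srchIndexesAllowed = main
srchIndexesDefault = main
```

This is a sketch, not a complete lockdown: you would still want to verify in Splunk Web which apps the role can see and remove read permissions on dashboards where needed.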
Hello @everyone, We have onboarded logs using the "Splunk Add-on for Microsoft SQL Server" add-on, and we have logs available for multiple sourcetypes. For one KPI I created, "SQL Query Last Elapsed Time", multiple SQL queries/stored procedures show up in the entities list, and we want to set a threshold for each entity (SQL query/stored procedure). I tried myself but have not found a solution yet. Please help with this. Thanks a lot!
Thanks for the observation. The problem is that, even if I comment out the highlighted row, the submit button's click event works only the first time I click; on subsequent clicks it doesn't even print "CLICKED". I'm more interested in solving that issue, because I don't understand why it behaves like that.
We currently have Splunk Enterprise 9.1.4 with 1 deployment server, 1 deployer (SH cluster), 2 cluster managers, 6 indexers (2 in each site), and 3 SHs (1 in each site), basically a 3-site cluster. The SHC deployer acts as license master for us. Please help me with the following: how do I renew the license from the file I will receive from my management? Do we need to push it to all other nodes, or is it already configured? Where do I check whether it is configured, and how do I verify that the renewal succeeded?
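A hedged sketch of the usual renewal steps, assuming the deployer really is your license manager and the file arrives as Splunk.lic (a hypothetical filename and path). Nodes already configured as license peers pointing at the manager need nothing pushed to them; only the manager holds the license file.

```shell
# Run on the license manager only
$SPLUNK_HOME/bin/splunk add licenses /tmp/Splunk.lic

# Confirm the new license is installed and check its expiry date
$SPLUNK_HOME/bin/splunk list licenses
```

The same can be done in Splunk Web under Settings > Licensing, which is also where each other node shows whether it is configured as a license peer of the manager.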
Again, your words don't quite match your expected output; however, does this work for you?

| makeresults format=csv data="raw 00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations 001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details - 001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1; 00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z 01hg34hgh44hghg4 - Exception while calling System A - null Exception message - CSR-a4cd725c-3d73-426c-b254-5e4f4adc4b26 - Generating exception because of multiple stage failure - abc_ELIGIBILITY 0013c5fb1737577541466 - Exception message - 0013c5fb1737577541466 - Generating exception because of multiple stage failure - abc_ELIGIBILITY b187c4411737535464656 - Exception message - b187c4411737535464656 - Exception in abc module. Creating error response - b187c4411737535464656 - Response creation couldn't happen for all the placements. Creating error response."
| rex field=raw max_match=0 "(\b)(?<words>[A-Za-z'_]+)(\b|$)"
| eval words = mvjoin(words, " ")
Hello, I am working with Splunk Security Essentials, and in the Analytics Advisor there is a MITRE ATT&CK Framework dashboard which is not being populated (as can be seen in the screenshot), despite the Data Inventory introspection having finished, and in other places I can see the data exists. Data models are also populated, but most are not accelerated except for the Authentication data model. This is a production environment and definitely has data; there should be some "Available" content there.
Actually, I did some further testing, but users with admin privileges seem to be immune to permissions when it comes to editing apps. So for now there is no way to prevent admins from writing to apps.
Yes, it seems the only workaround available is to use an unused role. I don't know whether this will in fact create issues down the road; we'll see!
Hi @Cheng2Ready, I agree with @isoutamo: to help you, sharing your code is mandatory. Anyway, the eval command that you need to create the host value from the two tokens has a different syntax:

| eval host=$server$."-".$appnumber$

In addition, remember that the "-" character in Splunk means minus, so your eval thinks it must subtract $appnumber$ from $server$. Ciao. Giuseppe
A small update to these links (the former is a repost of the latter): the former link/document was moved to https://www.invictus-ir.com/news/importing-windows-event-log-files-into-splunk
Hi @rahulkumar, Logstash is a log concentrator, so you're probably receiving logs of different sourcetypes from it (e.g. Linux, firewall, routers, switches, etc.). After extracting the metadata, you have to recover the raw event and assign each kind of log the sourcetype used by the related add-on; e.g. Linux logs must be assigned the sourcetypes linux_secure, linux_audit, and so on. These sourcetypes are the ones from the Splunk Add-on for Unix and Linux, which you can download from Splunkbase. Ciao. Giuseppe
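As an illustration of that reassignment, a hedged sketch (the input port, stanza names, and regex below are assumptions for the example): an index-time transform on the parsing tier that rewrites the sourcetype of raw events from the Logstash feed that look like Linux auth logs.

```
# props.conf -- tcp:5140 is a hypothetical Logstash input port
[source::tcp:5140]
TRANSFORMS-route_sourcetypes = set_linux_secure

# transforms.conf -- events whose raw text mentions sshd get linux_secure
[set_linux_secure]
REGEX = sshd\[\d+\]
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::linux_secure
```

In practice you would add one such transform per log family (firewall, routers, etc.), each with its own REGEX and target sourcetype.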
Thank you for the link(s). Would be great if Splunk had included this important bit of information in their docs...
Hi @greenpebble , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
I am encountering an issue regarding the synchronization of update logs between Sophos and Splunk for a specific host, designated as "EXAMPLE01." According to the Sophos console, the device has received updates on the following dates: 19 Nov 2024, 20 Nov 2024, 26 Nov 2024, 2 Dec 2024, 3 Dec 2024, 10 Dec 2024, 17 Dec 2024, and 21 Jan 2025. However, when I search in Splunk within the same timeframe (1 Nov 2024 to 23 Jan 2025), the logs only show updates on 3 Dec 2024, 10 Dec 2024, and 17 Dec 2024. I aim to establish a rule that triggers a notification if there has been no update for 20 days or more. Regrettably, despite the Sophos console indicating recent updates, the discrepancies in Splunk raise concerns about accurate monitoring. I have verified the settings under Indexing > Indexes and Volumes in Splunk, and everything appears to be configured correctly. Could anyone provide insights on how to track and resolve this discrepancy? Thank you for your assistance.
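For the alerting half of this (once the ingestion gap itself is resolved), a minimal SPL sketch of a "no update in 20+ days" alert; index=sophos is an assumption here, so substitute the index where the update events actually land:

```
| tstats latest(_time) as last_update where index=sophos by host
| eval days_since_update = round((now() - last_update) / 86400, 0)
| where days_since_update >= 20
```

Scheduled daily, this returns every host whose most recent update event is 20 or more days old, which can then trigger the notification.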
Since there is no such setting in indexes.conf, there are two possible reasons. 1. Less likely: you have this setting set somewhere. Look for it with either find | grep or with splunk btool, and remove it. 2. More likely: you hit some frontend issue, and coldPath_expanded is a variable existing only on your browser's side for some strange reason. In that case it's probably support-case material.
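A quick way to check reason 1, using the btool and grep approaches mentioned above (paths assume a standard $SPLUNK_HOME):

```shell
# Show every effective indexes.conf setting and which file it comes from
$SPLUNK_HOME/bin/splunk btool indexes list --debug | grep -i coldPath

# Search the whole config tree for the suspect key
grep -ri "coldPath_expanded" $SPLUNK_HOME/etc/
```

If neither command finds the key, the setting does not exist in your configs and reason 2 applies.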
1. As @VatsalJagani already pointed out, MongoDB is an integral part of the Splunk distribution, and Splunk relies on it to work properly. Therefore changing its configuration is not recommended, and you're very likely to cause problems if you change things without a deep understanding of their impact on the whole environment. 2. Baseline checks, vulnerability scans, and the like are just tools to help you assess the state of the system, not to do the job for you. They alone are not sufficient grounds for deciding what is OK and what is not. Running them blindly and following their "recommendations" without understanding the results of the performed tests and their context is not good practice.