All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi there, I am confused about the configuration steps for getting data in from Salesforce. When adding a Salesforce account I want to use OAuth, but I am only a Splunk admin; the technical user to be configured is managed by our Salesforce admin. My understanding is that one of us would need admin capabilities on both instances to make it work. What we tried: the user was configured on the Salesforce side from another account that is a Salesforce admin, and the add-on was configured on the Splunk side with my admin account. The redirect link has been added to Salesforce, and I tried to set up the add-on in Splunk as explained in the documentation for the add-on for Salesforce, but an error occurs when trying to connect them. Another hindrance is our use of LDAP. To make it work, I would need to give the Salesforce admin Splunk admin capabilities, or the other way around, I would need to get Salesforce admin rights. But that is something we do not want, as the capabilities should remain as they are: Splunk for Splunk, Salesforce for Salesforce. Is there any other way to make it work with a technical user? Or is it just not possible with OAuth? Best regards
Hi, I am currently working on an Adaptive Response that notifies us whenever there is a notable in our queue of a certain urgency. The notification must include the rule title and its configured urgency. I've been trying to solve this with the Add-On Builder, but so far I have only managed to pull the rule title via helper.settings.get("search_name"). I tried to get the urgency with get_events(), but that only seems to contain the details of the correlation search. Does anyone have a pointer to what I'm missing?
When exporting a PDF from the Splunk dashboard, I'm experiencing an issue where the graph appears to be truncated. Specifically, the PDF omits today's data from the graph, despite it being displayed correctly in the Splunk portal.
I have the configurations below in my transforms and props config files to change the source name of my events from udp:9514 to auditd, but it doesn't seem to be working.

transforms.conf

[change_source_to_auditd]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = MetaData:Source
FORMAT = source::auditd

props.conf

[source::udp:9514]
TRANSFORMS-change_source = change_source_to_auditd

Below are the sample logs:

Jan 23 19:06:00 172.28.100.238 Jan 23 08:05:18 LIDFP3NTF001.li.local audispd: type=EOE msg=audit(1737619518.941:2165876):
Jan 23 19:06:00 172.28.100.238 Jan 23 08:05:18 LIDFP3NTF001.li.local audispd: type=PROCTITLE msg=audit(1737619518.941:2165876): proctitle=2F7573722F7362696E2F727379736C6F6764002D6E
Jan 23 19:06:00 172.28.100.238 Jan 23 08:05:18 LIDFP3NTF001.li.local audispd: type=SOCKADDR msg=audit(1737619518.941:2165876): saddr=020019727F0000010000000000000000 SADDR={ saddr_fam=inet laddr=127.0.0.1 lport=6514 }
Jan 23 19:06:00 172.28.100.238 Jan 23 08:05:18 LIDFP3NTF001.li.local audispd: type=SYSCALL msg=audit(1737619518.941:2165876): arch=c000003e syscall=42 success=yes exit=0 a0=f a1=7fedf8006c20 a2=10 a3=0 items=0 ppid=1 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=72733A6D61696E20513A526567 exe="/usr/sbin/rsyslogd" key="network_connect_4" ARCH=x86_64 SYSCALL=connect AUID="unset" UID="root" GID="root" EUID="root" SUID="root" FSUID="root" EGID="root" SGID="root" FSGID="root"
Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=CRED_DISP msg=audit(1737619560.680:2114873): pid=3709107 uid=985 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' UID="telegraf" AUID="unset" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=CRED_REFR msg=audit(1737619560.577:2114872): pid=3709107 uid=985 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' UID="telegraf" AUID="unset" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=USER_ACCT msg=audit(1737619560.577:2114871): pid=3709107 uid=985 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="telegraf" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' UID="telegraf" AUID="unset" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=EOE msg=audit(1737619560.577:2114870): Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PROCTITLE msg=audit(1737619560.577:2114870): proctitle=7375646F002F7573722F7362696E2F706D63002D75002D620031004745542054494D455F5354415455535F4E50 Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PATH msg=audit(1737619560.577:2114870): item=0 name="/etc/shadow" inode=132150 dev=fd:00 mode=0100000 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 OUID="root" OGID="root" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=SYSCALL msg=audit(1737619560.577:2114870): arch=c000003e syscall=257 success=yes exit=9 a0=ffffff9c a1=7fc1d61bbe1a a2=80000 a3=0 items=1 ppid=3709106 pid=3709107 auid=4294967295 uid=985 gid=985 euid=0 suid=0 fsuid=0 egid=985 sgid=985 fsgid=985 tty=(none) ses=4294967295 comm="sudo" exe="/usr/bin/sudo" key="etcpasswd" ARCH=x86_64 SYSCALL=openat AUID="unset" UID="telegraf" GID="telegraf" EUID="root" SUID="root" FSUID="root" EGID="telegraf" SGID="telegraf" FSGID="telegraf" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 
LIDFP3NTF002.li.local audispd: type=EOE msg=audit(1737619560.570:2114869): Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PROCTITLE msg=audit(1737619560.570:2114869): proctitle=7375646F002F7573722F7362696E2F706D63002D75002D620031004745542054494D455F5354415455535F4E50 Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PATH msg=audit(1737619560.570:2114869): item=1 name="/lib64/ld-linux-x86-64.so.2" inode=397184 dev=fd:00 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 OUID="root" OGID="root" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=PATH msg=audit(1737619560.570:2114869): item=0 name="/usr/bin/sudo" inode=436693 dev=fd:00 mode=0104111 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 OUID="root" OGID="root" Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=EXECVE msg=audit(1737619560.570:2114869): argc=6 a0="sudo" a1="/usr/sbin/pmc" a2="-u" a3="-b" a4="1" a5=4745542054494D455F5354415455535F4E50 Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=BPRM_FCAPS msg=audit(1737619560.570:2114869): fver=0 fp=0 fi=0 fe=0 old_pp=00000000000000c2 old_pi=00000000000000c2 old_pe=00000000000000c2 old_pa=00000000000000c2 pp=00000000200000c2 pi=00000000000000c2 pe=00000000200000c2 pa=0 frootid=0 Jan 23 19:06:00 172.28.100.238 Jan 23 08:06:00 LIDFP3NTF002.li.local audispd: type=SYSCALL msg=audit(1737619560.570:2114869): arch=c000003e syscall=59 success=yes exit=0 a0=7fe718b344a0 a1=7fe7186addb0 a2=7ffcc797d010 a3=3 items=2 ppid=3709106 pid=3709107 auid=4294967295 uid=985 gid=985 euid=0 suid=0 fsuid=0 egid=985 sgid=985 fsgid=985 tty=(none) ses=4294967295 comm="sudo" exe="/usr/bin/sudo" key="priv_esc" ARCH=x86_64 SYSCALL=execve AUID="unset" UID="telegraf" GID="telegraf" EUID="root" SUID="root" FSUID="root" 
EGID="telegraf" SGID="telegraf" FSGID="telegraf"
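For reference, a hedged sketch of how this pair of stanzas is typically written for a metadata rewrite. Note two assumptions this sketch makes: the files must live on the first "full" Splunk instance that parses the data (an indexer or heavy forwarder, not a universal forwarder), and that instance must be restarted after the change.

```
# transforms.conf -- rewrite the source metadata field for every event
[change_source_to_auditd]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = MetaData:Source
FORMAT = source::auditd

# props.conf -- apply the transform to events arriving with source udp:9514
[source::udp:9514]
TRANSFORMS-change_source = change_source_to_auditd
```

If these stanzas sit on a universal forwarder, or on a search head downstream of where parsing happens, the transform never runs even though the syntax is valid.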
Hi, I need to create a user account that has no access at all to the dashboards. The only purpose of the account is to run scheduled searches through the REST API. Does anyone know if it's possible to create such an account?
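As a sketch of the role side (the role name and capability set here are assumptions, not a verified recipe; dashboard visibility is ultimately governed by app and knowledge-object permissions, so the role should also not inherit from user):

```
# authorize.conf -- a minimal role for API-only scheduled searches
[role_api_search_only]
importRoles =
search = enabled
schedule_search = enabled
srchIndexesAllowed = *
srchIndexesDefault = main
```

A user holding only such a role can authenticate to the management port (8089) and run or schedule searches via REST; hiding Splunk Web entirely cannot be done per-role, so restricting the role's read access to apps is the usual complement.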
Hello @everyone, we have onboarded logs using the "Splunk Add-on for Microsoft SQL Server" add-on. We have logs available for multiple source types. For one KPI that I created, "SQL Query Last Elapsed Time", we have multiple SQL queries/stored procedures showing in the entities list. We want to set a threshold for each entity (SQL query/stored procedure). I tried myself but have not found a solution yet. Please help with this. Thanks a lot!
Currently we have Splunk Enterprise 9.1.4 with 1 deployment server, 1 deployer (SH cluster), 2 cluster managers, 6 indexers (2 in each site), and 3 SHs (1 in each site); basically a 3-site cluster. The SHC deployer acts as the license manager for us. Please help me with how to renew the license from the file I will receive from my management. Do we need to push it to all other nodes, or is that already configured, and where can I check whether it is configured? How can I check whether the renewal was successful?
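As a hedged sketch: renewing normally means installing the new license file on the license manager only (the deployer in this setup); the other nodes are license peers and keep pointing at it. Assuming the file arrives as new_license.lic:

```
# Run on the license manager (the deployer here) as the splunk user
$SPLUNK_HOME/bin/splunk add licenses /path/to/new_license.lic

# Verify the renewal: the new license should appear in the output
$SPLUNK_HOME/bin/splunk list licenses
```

Whether the other nodes already point at the manager can be checked in each node's server.conf under the [license] stanza (manager_uri, or master_uri on older versions), or in the UI under Settings > Licensing.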
Hello, I am working with Splunk Security Essentials, and in the Analytics Advisor there is a MITRE ATT&CK Framework dashboard which is not being populated, as can be seen in the screenshot, despite my finishing the Data Inventory introspection, and in other places I can see the data exists. Data models are also populated, but most are not accelerated except for the Authentication data model. This is a production environment and definitely has data. There should be some "Available" content there.
I am encountering an issue regarding the synchronization of update logs between Sophos and Splunk for a specific host, designated as "EXAMPLE01." According to the Sophos console, the device received updates on the following dates: 19 Nov 2024, 20 Nov 2024, 26 Nov 2024, 2 Dec 2024, 3 Dec 2024, 10 Dec 2024, 17 Dec 2024, and 21 Jan 2025. However, when I search in Splunk within the same timeframe (1 Nov 2024 to 23 Jan 2025), the logs only show updates on 3 Dec 2024, 10 Dec 2024, and 17 Dec 2024. I aim to establish a rule that triggers a notification if there has been no update for 20 days or more. Regrettably, despite the Sophos console indicating recent updates, the discrepancies in Splunk raise concerns about accurate monitoring. I have verified the settings under Indexing > Indexes and Volumes in Splunk, and everything appears to be configured correctly. Could anyone provide insights on how to track and resolve this discrepancy? Thank you for your assistance.
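For the alerting side, a hedged SPL sketch (the index and sourcetype names are assumptions; adjust them to wherever the Sophos update events actually land):

```
index=sophos sourcetype=sophos:update
| stats latest(_time) as last_update by host
| where now() - last_update > 20 * 86400
| eval last_update=strftime(last_update, "%Y-%m-%d %H:%M")
```

Saved as a scheduled alert, this lists hosts whose most recent update event is older than 20 days. Note it only fires for hosts that logged at least one update in the search window, so the ingestion gap described above has to be resolved first for the rule to be trustworthy.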
Hi Team, I am working with Splunk version 7.3.2, and I would like to add a custom field called jira_ticket to notable events. The goal is to initially populate this field during the event creation process and later update its value via the API as the ticket progresses through different stages in Jira. Here are my key questions:

1. What is the best way to add a custom field like jira_ticket to notable events? Are there specific configurations or updates needed in correlation searches or incident review settings?
2. How can I reliably update this field through the API after it has been created? Are there any specific endpoints or parameters I need to be aware of?
3. Since I am using an older Splunk version (7.3.2), are there any limitations or additional considerations I should keep in mind while implementing this?

If anyone has successfully implemented a similar setup or can point me toward documentation, examples, or best practices, I'd greatly appreciate your input. Thank you in advance!
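For the creation side, one common approach (a sketch only; not verified against ES on 7.3.2, and the placeholder value is an assumption) is to emit the field from the correlation search itself, since fields returned by the search are stored with the notable:

```
... existing correlation search ...
| eval jira_ticket="UNASSIGNED"
```

The field then needs to be added to the Incident Review display via ES's Incident Review settings to be visible to analysts. Updating it later is typically done against the notable's event_id through ES's notable-editing REST endpoint, whose exact path and parameters vary by ES version, so check the docs for the ES release paired with 7.3.2.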
Hi, I recently upgraded from Splunk 9.1.5 to 9.3.2. In 9.1.5 there were .sh and .py files in the Splunk/bin/scripts directory, but after upgrading to 9.3.2 they were moved to the splunk/quarantine_file path. Why were the files moved? And were only the files in the splunk/bin/scripts path moved?
Running a clean install on RHEL 8.9, kernel version 4.18.0-553.34.1.el8_10.x86_64. I followed the instructions on the install page for the soar-prepare-system command, not running clustered, default options for everything, and created the phantom user with no trouble. /opt/splunk-soar is owned by phantom, and I ran the soar-install command as phantom. Everything went fine until the GitRepos step, which hit this error: "INSTALL: GitRepos Configuring default playbook repos Failed to bootstrap playbook repos Install failed." The detailed error logs look kind of ugly, but I'm seeing this: File \"/opt/splunk-soar/usr/python39/lib/python3.9/site-packages/git/cmd.py\", line 1388, in execute", " raise GitCommandError(redacted_command, status, stderr_value, stdout_value)", "git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)", " cmdline: git ls-remote --heads https://github.com/phantomcyber/playbooks", " stderr: 'fatal: unable to access 'https://github.com/phantomcyber/playbooks/': SSL certificate problem: unable to get local issuer certificate'"], "time_elapsed_since_start": 6.000021, "time_elapsed_since_operation_start": 4.386305} Any thoughts on how to get it to find the local issuer certificate, or another way around the issue? Thanks.
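One hedged workaround, assuming the cause is a TLS-inspecting corporate proxy or an incomplete CA store rather than GitHub itself: point git, for the phantom user the installer runs as, at a CA bundle containing the issuing root. For example, in the phantom user's ~/.gitconfig (the path below is RHEL's system bundle; if a proxy re-signs TLS, append the proxy's root CA to that bundle first):

```
# ~/.gitconfig for the phantom user -- a sketch, not an official fix
[http]
	sslCAInfo = /etc/pki/tls/certs/ca-bundle.crt
```

Verify with git ls-remote --heads https://github.com/phantomcyber/playbooks as the phantom user before re-running soar-install.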
Hello, attempting to upgrade our test environment from 9.3.2 to 9.4.0 on Windows Server 2019 fails with the following message found in splunk.log:

<time> C:\windows\system32\cmd.exe /c "C:\Windows\system32\icacls "C:\Program Files\Splunk" /grant "LocalSystem:(OI)(CI)(F)" /T /C >> "<out to %temp%\splunk.log>" 2>&1"
LocalSystem: No mapping between account names and security IDs was done.
Successfully processed 0 files; Failed processing 1 files.

Seems pretty straightforward: it is attempting to grant Full Control on all files and subdirectories... EXCEPT it almost certainly should be "NT AUTHORITY\System", not "LocalSystem". Pretty sure this is just a Linux-vs-Windows nomenclature thing. Are there any suggestions for forcing the permission grant to use the correct account, or do I need to open a support ticket to have this fixed in the next release?
I've created two dropdown menus that take in tokens in my search. In the first dropdown I select the server (token $server$); the second dropdown filters the dashboard to an individual application number (token $appnumber$). The host usually appears as host = servername-appnumber. I tried this: host="$server$-$appnumber$". What am I doing wrong? Any advice or help would be appreciated.
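A quick way to debug token concatenation like this, as a sketch, is a throwaway panel that just renders the combined value:

```
| makeresults
| eval combined_host="$server$-$appnumber$"
```

If the rendered value is not exactly the expected servername-appnumber — for example because a dropdown contributes quotes, a default "All"/* value, or stray whitespace — the host filter will silently match nothing.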
Hi Community, please help me extract the highlighted value (the formatting was lost in posting; presumably the connector name, ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1) from the string below: [2025-01-22 13:33:33,899] INFO Setting connector ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1 state to PAUSED (org.apache.kafka.connect.runtime.Worker:1391)
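Assuming the target is the connector name (and possibly the state alongside it), a rex sketch that anchors on the literal text surrounding the value:

```
| rex field=_raw "Setting connector (?<connector_name>\S+) state to (?<connector_state>\S+)"
```

For the sample event this yields connector_name=ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1 and connector_state=PAUSED.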
Afternoon, I've been beating my head against the keyboard the last few days trying to get this to work. I want to exclude these two event codes from being indexed. This is what my inputs.conf file looks like:

[default]
host = "hostname"

[splunktcp://9997]
connection_host = ip

[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist = 5447,6417

I save the file and restart Splunk from Settings -> Server Controls -> Restart Splunk, then wait about 30 minutes or so to see if the event codes are being dropped from my index. No joy. I've tried adding sourcetype=WinEventLog:Security, changing the blacklist number, and using this:

[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist1 = EventCode="5447" Message="A Windows Filtering Platform filter has changed*"

Still no joy.
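For reference, a minimal sketch of the stanza. One caveat worth checking: a WinEventLog blacklist only takes effect in the inputs.conf of the machine actually collecting the events (the Windows host running the forwarder), not on an indexer that merely receives them over splunktcp — and the presence of a [splunktcp://9997] stanza suggests this file may be on the receiving side.

```
# inputs.conf on the Windows host running the (universal) forwarder
[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist = 5447,6417
```

The blacklist also only applies to events collected after the forwarder restarts; anything already indexed stays in the index.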
Combing through firewall logs, I am extracting source, destination, and dest_port. I have a CSV lookup file with ports and descriptions of those ports, both UDP and TCP. I want to take the description from the lookup and add it to the results in a table. Here is my search:

| stats count by SRC, DST, DEST_PORT
| lookup tcp-udp description OUTPUT description AS desc, port
| eval desc=if(DPT = port, description, "not ok")
| table SRC, DST, DEST_PORT, port, desc

The port and desc fields are blank and say "not ok" respectively. I'm stuck...
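As a hedged sketch of the likely fix (it assumes the lookup's key column is named port; adjust to the actual CSV header): lookup takes the match field before OUTPUT, mapped onto the field present in the events, after which the eval comparison is unnecessary:

```
| stats count by SRC, DST, DEST_PORT
| lookup tcp-udp port AS DEST_PORT OUTPUT description AS desc
| fillnull value="unknown port" desc
| table SRC, DST, DEST_PORT, desc
```

In the original search, description was used as the match field, so nothing matched and both output fields came back empty.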
Hi folks, I'm looking to use es_notable_events as a way of building out a panel that will get info on ES events for the past 7 days, specifically how many alerts were closed by the team and what the alert names are. The search I am using is as follows:

| `es_notable_events`
| search timeDiff_type=current
| stats sparkline(sum(count),30m) as sparkline, sum(count) as count by rule_name
| sort 100 - count
| table rule_name, count

This works perfectly for the past 48 hours, but it doesn't go back as far as a week (a known limitation when using es_notable_events, apparently!). My question is: are there any alternative searches I can run that will get these results?
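One alternative sketch goes straight at the notable index via ES's `notable` macro, which enriches each event with its current status (the status_label value is an assumption from a typical ES setup):

```
`notable`
| search status_label="Closed"
| stats count by rule_name
| sort - count
```

Run over a 7-day time range; because it reads the notable index directly rather than a summary, it is not bound by a 48-hour window, at the cost of being somewhat slower than the summary-based search.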
I'm trying to learn Splunk for an upcoming position. I recently purchased Parallels so I could use Windows VMs. I was trying to set up an indexer on one VM and a forwarder on another, and just mess around with Splunk's capabilities. Is this even possible? So far it hasn't worked, and I have tried a few alterations of the outputs.conf file on the forwarder. Since the VMs have the same public address, I tried to use the private address, and I also tried to go by hostname, and it still didn't work. Any suggestions?
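This setup is definitely possible. A minimal sketch of the forwarder side (the IP and group name are placeholders; use the indexer VM's private address):

```
# outputs.conf on the forwarder VM -- a sketch with placeholder values
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = 10.211.55.10:9997
```

Two things to confirm on the indexer VM: receiving is enabled on port 9997 (Settings > Forwarding and receiving > Receive data, or an [splunktcp://9997] stanza in inputs.conf), and the Windows firewall allows inbound TCP 9997.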
I'm on the server/infrastructure team at my organization. There is a dedicated Splunk team, and they want to replace some RHEL 7 Splunk servers with RHEL 8. RHEL 8 is already near the end of its lifecycle, and I'd rather provide them with RHEL 9, which is now our standard build. The fact that they still use RHEL 7 servers gives you some sense of how long it takes them to move their application to a new(ish) OS. They are insistent that we deploy RHEL 8 servers so they are "all the same." I want to encourage them to move forward and have a platform that will be fully supported for several years to come. Is having some servers on RHEL 8 and some on RHEL 9 for a period of time an actual problem? They use version 9.1.2. I found this document: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Systemrequirements It lists support for x86_64 kernels 4.x (RHEL 8) and 5.x (RHEL 9), but it doesn't elaborate any further. I know that for various reasons we'd want to eventually have all servers on the same OS version; I'm just wondering if having RHEL 8 and RHEL 9 coexist for a limited period presents an actual problem. I'd appreciate your thoughts. Daniel