All Posts

I have the below configuration in my logback.xml. While the url, token, index, sourcetype and disableCertificateValidation fields are getting picked up, the batchInterval, batchCount and sendMode are not. I ran my application in debug mode, and I did see that the `ch.qos.logback.core.model.processor.AppenderModelHandler` is picking up these tags as submodels correctly. Can someone please help me understand if I'm doing anything wrong here?

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="SPLUNK_HTTP" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
    <url>my-splunk-url</url>
    <token>my-splunk-token</token>
    <index>my-index</index>
    <sourcetype>${USER}_local</sourcetype>
    <disableCertificateValidation>true</disableCertificateValidation>
    <batchInterval>1</batchInterval>
    <batchCount>1000</batchCount>
    <sendMode>parallel</sendMode>
    <retriesOnError>1</retriesOnError>
    <layout class="my-layout-class">
      <!-- some custom layout configs -->
    </layout>
  </appender>
  <logger name="com.myapplication" level="DEBUG" additivity="false">
    <appender-ref ref="SPLUNK_HTTP"/>
  </logger>
  <root level="DEBUG">
    <appender-ref ref="SPLUNK_HTTP"/>
  </root>
</configuration>

I'm using the following dependency for Splunk, if it matters:

<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.11.7</version>
</dependency>
Did you read my question carefully? Users can delete their own alerts, but they cannot delete their own reports. If they didn’t have the "delete_knowledge_objects" capability, how would they even be able to delete their own alerts in the first place? This inconsistency is exactly the issue I’m trying to highlight.
@m_zandinia  To delete a report in Splunk, a user must have the delete capability assigned to their role. This capability allows users to delete knowledge objects such as saved reports, alerts, and dashboards that they own or have permission to manage. See Create and edit reports - Splunk Documentation. By default the admin role has the "admin_all_objects" capability, so an admin can delete the report. I created a test user called "test-user", assigned the user and power user roles to it, and can see that admin_all_objects is not assigned.
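If you want to confirm which capabilities the roles actually carry, a quick check is to query the authorization endpoint - a minimal sketch, assuming the default "user" and "power" role names; adjust the role titles to match your environment:

| rest /services/authorization/roles splunk_server=local
| search title="user" OR title="power"
| table title capabilities imported_capabilities

Look for delete_by_keyword / admin_all_objects (or whatever capability your deletion path requires) in either the capabilities or imported_capabilities columns.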
It's not an orphaned knowledge object. A user creates a report and immediately afterward is unable to delete it. A user creates an alert and can delete it immediately without issue. The report is created in the correct location (as shown in the screenshot below). However, the delete request sent by the Splunk UI is targeting the wrong URL, and I'm not sure why this is happening.

P.S. The admin is able to delete the report without any issue.
P.S. For privacy reasons, I manually changed the report name to "test-user-report" in the log samples. The actual report name matches what is shown in the screenshot.
@clumicao  I haven't worked on Mission Control before, but you can check this documentation – it might be helpful:
Apply filters and save filtered views for incidents
Triage incidents using incident review in Splunk Mission Control - Splunk Documentation
I forgot what it felt like for someone to copy my assignment in homeroom 
Hi @m_zandinia  It sounds like in your scenario your users can delete their own reports / knowledge objects within the app, but not ones owned by "nobody" (and probably not able to delete anything owned by any other user either). In order for them to delete app-level shared objects they will need write permission to the app - does their role have this? (Or the admin_all_objects capability.) Additionally, if the knowledge object is globally shared they would need the admin_all_objects capability.

@kiran_panchavat I'm not convinced those answers from 10 years ago are still valid, or they are only partially valid - if you delete a user which owns knowledge objects, the owner does not get changed to "nobody". The objects stay owned by the original owner but become orphaned. In the below example I have 2 searches "owned" by "testing1", which I deleted, thus they become orphaned and are still owned by the testing1 user.

It's not uncommon for things to be owned by "nobody" - whilst I prefer the use of service accounts, a lot of customers use the nobody user for owning artifacts which don't have a specific owner within an app (e.g. no specific named person, thus "nobody"). According to the docs, an orphaned search "will not run the scheduled report on its schedule at all" - whereas a search owned by "nobody" will. It's important to know that searches owned by "nobody" do get executed and are essentially run as the system user, so they have access to all indexes, lookups etc. Therefore, if someone has write access to a search owned by nobody, they could modify it to search indexes which they themselves are not allowed to search! This is why I always recommend searches be owned by a service account, following the principle of least-privilege access.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
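As a follow-up to the write-permission point above, one quick way to see whether the users' role has write access on the app itself is to query the local apps endpoint - a minimal sketch, where "SOC" is the app name from this thread and should be swapped for your own app:

| rest /services/apps/local splunk_server=local
| search title="SOC"
| table title eai:acl.sharing eai:acl.perms.read eai:acl.perms.write

If the role does not appear in eai:acl.perms.write (and does not have admin_all_objects), deleting app-shared objects in that app will fail.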
@m_zandinia  Check this:

| rest /services/saved/searches
| table title, eai:acl.owner, eai:acl.app, eai:acl.sharing, eai:acl.perms.read, eai:acl.perms.write
| rename title as "Report Name", eai:acl.owner as "Owner", eai:acl.app as "App", eai:acl.sharing as "Sharing Level", eai:acl.perms.read as "Read Permissions", eai:acl.perms.write as "Write Permissions"
| sort App, "Report Name"
@m_zandinia  Are they unable to delete all reports or just specific ones? Did you check the permissions for the reports? 
@m_zandinia  It means that the user that created the object is no longer a user in the authenticating system. If you create a local user, log in as that user, create any knowledge object, and then delete that user, all of their KOs will switch to be owned by nobody.

What are "splunk-system-user" and "nobody"? - Splunk Community
Solved: What does 'nobody' (under owner column) signify in... - Splunk Community

Saved searches can become orphaned or associated with users that no longer exist. In such cases, reassigning the saved search to an existing valid user and then deleting it via the GUI can resolve the issue.
https://community.splunk.com/t5/Deployment-Architecture/Orphaned-Scheduled-Search-cannot-delete/m-p/221185
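To hunt for saved searches whose owner no longer exists, a search along these lines can help - a minimal sketch; it is only a rough check, since /services/authentication/users reflects the users Splunk currently knows about:

| rest /services/saved/searches splunk_server=local
| rename eai:acl.owner as owner
| join type=left owner
    [| rest /services/authentication/users splunk_server=local
     | rename title as owner
     | eval user_exists=1
     | table owner user_exists]
| where isnull(user_exists) AND owner!="nobody"
| table title owner eai:acl.app eai:acl.sharing

Anything returned here is owned by an account that Splunk no longer recognises, i.e. a candidate orphaned object.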
Hi Splunkers, I'm running a Splunk Search Head Cluster (SHC) with 3 search heads, authenticated via Active Directory (AD). We have several custom apps deployed.

Currently, users are able to:
Create alerts
Delete alerts
Create reports

However, they are unable to delete reports.

Investigation Details
From the _internal logs, here's what I observed:

When deleting an alert - the deletion works fine:
192.168.0.1 - user [17/May/2025:11:06:59.687 +0000] "DELETE /en-US/splunkd/__raw/servicesNS/username/SOC/saved/searches/test-user-alert?output_mode=json HTTP/1.1" 200 421 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" - eac203572253a2bd3db35ee0030c6a76 68ms
192.168.0.1 - user [17/May/2025:11:06:59.690 +0000] "DELETE /servicesNS/username/SOC/saved/searches/test-user-alert HTTP/1.1" 200 421 "-" "Splunk/9.4.1 (Linux 6.8.0-57-generic; arch=x86_64)" - - - 65ms

When deleting a report - it fails with a 404 Not Found:
192.168.0.1 - user [17/May/2025:10:27:51.699 +0000] "DELETE /en-US/splunkd/__raw/servicesNS/nobody/SOC/saved/searches/test-user-report?output_mode=json HTTP/1.1" 404 84 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" - eac203572253a2bd3db35ee0030c6a76 5ms
192.168.0.1 - user [17/May/2025:10:27:51.702 +0000] "DELETE /servicesNS/nobody/SOC/saved/searches/test-user-report HTTP/1.1" 404 84 "-" "Splunk/9.4.1 (Linux 6.8.0-57-generic; arch=x86_64)" - - - 1ms

Alerts are created under the user's namespace (servicesNS/username/...) and can be deleted by the user. Reports appear to be created under the nobody namespace (servicesNS/nobody/...), which may be the reason users lack permission to delete them. Has anyone faced a similar issue?
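For anyone debugging the same thing, a quick way to confirm which owner and namespace the report actually lives under is to query the saved searches endpoint for that app - a minimal sketch; "SOC" and "test-user-report" are the app and report names taken from the log samples above:

| rest /servicesNS/-/SOC/saved/searches splunk_server=local
| search title="test-user-report"
| table title eai:acl.owner eai:acl.app eai:acl.sharing eai:acl.perms.write

Comparing eai:acl.owner with the user the UI puts in the DELETE URL (username vs nobody) should show whether the UI is targeting the wrong namespace for that object.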
Hi @Na_Kang_Lim  @livehybrid is 100% right, TrackMe V1 is out of date and has been unsupported for more than 2 years now, therefore I wouldn't respond on this. Please consider upgrading to TrackMe V2: https://docs.trackme-solutions.com There are options to migrate from V1 to V2 (https://docs.trackme-solutions.com/latest/migration_trackmev1.html), but I would suggest considering a fresh configuration rather than migrating.
I've seen this exact issue before with Splunk Universal Forwarders. The "splunkd.pid doesn't exist" error combined with the "tcp_conn_open_afux ossocket_connect failed" messages typically happens when there's a conflict between how the Splunk process is started and managed.

Based on your description, this is likely one of two issues:
a. Duplicate systemd service files causing a "split brain" situation
b. Permission problems with the Splunk installation directory

For the first issue, check if you have duplicate service definitions:
ls -la /usr/lib/systemd/system/SplunkForwarder.service
ls -la /etc/systemd/system/SplunkForwarder.service

If both exist, that's causing your problem! The one in /etc/systemd/system takes precedence, and they might have different user/permission settings. You can fix this by:
sudo rm /etc/systemd/system/SplunkForwarder.service
sudo systemctl daemon-reload
sudo systemctl restart SplunkForwarder

If that doesn't work, check the ownership of your Splunk files:
ls -la /opt/splunkforwarder

Make sure everything is owned by the correct user (typically splunk:splunk). If permissions are wrong, you can fix them with:
chown -R splunk:splunk /opt/splunkforwarder

As a last resort, the complete reinstall approach works well:
sudo systemctl stop SplunkForwarder
sudo yum remove splunk*
sudo rm -rf /opt/splunkforwarder

Then reinstall the forwarder and configure it properly. I've had good success with this approach when dealing with these mysterious pid and socket connection errors. Please give karma if this helped - happy Splunking!
Hi @splunkville  No, this will not work because the source key (cmd_data) contains the shortened version, which has been broken up due to the space. Your transforms.conf and props.conf configs need adjustment. To extract the full value after cmd_data=, use this:

== props.conf ==
[yourSourcetype]
REPORT-full_cmd = full_cmd

== transforms.conf ==
[full_cmd]
REGEX = cmd_data=([^\]]+)\]
FORMAT = full_cmd::$1

The REGEX captures everything after cmd_data= up to the "]". REPORT- in props.conf applies the transform at search time.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
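As a quick aside, you can sanity-check the regex itself before editing the configs by running it with rex over a sample event - a minimal sketch using the example value from the original question:

| makeresults
| eval _raw="[cmd_data=list cm device recusive]"
| rex field=_raw "cmd_data=(?<full_cmd>[^\]]+)\]"
| table full_cmd

This should return full_cmd = "list cm device recusive"; once that looks right, the same pattern can go into transforms.conf as above.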
[cmd_data=list cm device recusive]

Splunk auto extracts just [cmd_data=list].

End result - be able to filter on cmd_data and get the full cmd / multiple values.

Will these configs work?

transforms.conf
[full_cmd]
SOURCE_KEY = cmd_data
REGEX = (cmd_data)\S(?<full_cmd>.*)
FORMAT = full_cmd::$1

props.conf
EXTRACT-field full_cmd
I know it has been a while since this question was asked but here is a simple xml dashboard that will show all the other dashboards in the app along with the descriptions and links to them. It is populated dynamically with a rest search. All you would need to do would be to edit the app nav to look something like the below example if you call the dashboard "overview". If you don't call it "overview" then change the first view element to whatever you named the dashboard. Specifically, it is looking for the name that is shown in the URL bar, not the editable label in the dashboard. Navigation <nav search_view="search"> <view name="overview" default="true" /> <view name="search" /> <view name="analytics_workspace" /> <view name="datasets" /> <view name="reports" /> <view name="alerts" /> <view name="dashboards" /> </nav>   Dashboard <dashboard version="1.1" theme="dark"> <label>Overview</label> <row id="cards"> <panel> <table> <search> <query>| rest /servicesNS/-/$env:app$/data/ui/views splunk_server=local | rename eai:acl.app as app | search isDashboard=1 app=$env:app$ title!=$env:page$ | table label description app title | sort label</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="count">100</option> <option name="drilldown">cell</option> <drilldown> <link target="_blank">/app/$row.app$/$row.title$</link> </drilldown> </table> </panel> </row> <row id="styles_row"> <panel> <html> <style> /* Hide the styles panel */ #styles_row { display: none; } /* Remove search tools (refresh, etc) */ .element-footer.dashboard-element-footer { display: none !important; } /* Remove hover background color */ #statistics .results-table tbody td.highlighted { background-color: #FAFAFA !important; } /* Remove hover background color dark mode */ .dashboard-panel[class*="dashboardPanel---pages-dark"] #statistics .results-table tbody td.highlighted { background-color: #31373B !important; } #statistics.results-table { padding: 10px; box-sizing: border-box !important; } /* Style the table to make it just a simple container element */ body table { width: 100% !important; min-width: 100% !important; display: block; box-sizing: border-box !important; border: none !important; -webkit-box-shadow: none !important; box-shadow: none !important; } [id^="cards"] table tbody td { font-family: Splunk Platform Sans,Proxima Nova,Roboto,Droid,Helvetica Neue,Helvetica,Arial,sans-serif !important; } /* Hide the table header */ thead { display: none; } /* Make the tbody a grid layout */ tbody { display: grid; grid-template-columns: 33% 33% 33%; box-sizing: border-box !important; column-gap: 10px; row-gap: 10px; border: none !important; } /* Bold the dashboard title */ tr td:nth-child(1) { font-weight: bold; } /* Make the card text black in light mode */ tr td:nth-child(2) { color: #000000 !important; } /* Make the card text white in dark mode */ .dashboard-panel[class*="dashboardPanel---pages-dark"] tr td:nth-child(2) { color: #FFFFFF !important; } /* Hide the 3rd (app) and 4th (title) columns */ tr td:nth-child(3), tr td:nth-child(4) { display: none; } /* Turn the table rows into cards */ tbody tr { border-radius: 5px; border: 1px solid #999999; padding: 10px; } tbody tr, tbody tr td { box-sizing: border-box !important; display: block; background: #FAFAFA !important; } .dashboard-panel[class*="dashboardPanel---pages-dark"] tbody tr, .dashboard-panel[class*="dashboardPanel---pages-dark"] tbody tr td { background: #31373B !important; } .table td { padding-top: 0 !important; padding-bottom: 0 !important; 
} </style> </html> </panel> </row> </dashboard>  
Hi @Na_Kang_Lim  Based on the lookup name, it sounds like you are on TrackMe V1 - have you considered upgrading to V2.x? There are a bunch of bug fixes which could be impacting your issue here, and new features you can use in the later versions.

@asimit I'm intrigued by the "trackme_max_data_tracker_history" macro - where can I find this? I can't see it in my installation of TrackMe (V1 or V2)! Also, the REST endpoint you posted doesn't seem to exist.
Hi @token2  I disagree with some of the information in the markdown posted on the other post, specifically around the API usage ("Gathers performance data through the vCenter API") - this is not correct. Neither of the apps mentioned connect to the API: the vCenter app uses syslog + monitor inputs (file monitoring) to pick up events, and the ESXi app is purely syslog.

The Splunk_TA_vcenter (Splunk Add-on for vCenter Log) should be installed on a Splunk Universal Forwarder running on the vCenter Server host, so it can monitor vCenter log files directly from the filesystem. This takes vCenter logs only, which last time I checked didn't seem to include the individual ESXi logs.

The Splunk Add-on for VMware ESXi Logs should be installed on a Splunk forwarder or heavy forwarder that is receiving syslog data from the ESXi hosts. If you install this on the same host as the vCenter app, ensure you use a unique syslog port for it so the sourcetype field extractions can work correctly (see the sketch after this post).

If you want performance info/metrics etc. then you need the "Splunk Add-on for VMware Metrics": The Splunk Add-on for VMware Metrics is a collection of add-ons used to collect and transform the Performance, Inventory, Tasks, and Events data from VMware vCenters, ESXi hosts, and virtual machines. The Splunk Add-on for VMware Metrics contains the following components:
Splunk_TA_vmware_inframon - runs a Python-based API data collection engine, collects data from the VMware vSphere environment, and performs field extractions for VMware data.
SA-Hydra-inframon

Depending on your use case you might prefer to use all, or a specific subset, of the many VMware apps available! Please let me know if you want further clarity on any of these and feel free to share your use cases so we can help refine which apps might benefit you.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
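For the unique syslog port point above, a minimal inputs.conf sketch on the forwarder receiving ESXi syslog might look like the following. The port number, index name and sourcetype here are placeholders/assumptions, so confirm the exact sourcetype required by the Splunk Add-on for VMware ESXi Logs in its documentation:

# inputs.conf on the (heavy) forwarder receiving ESXi syslog
# 1514, vmware_esxilog and vmw-syslog are placeholder values
[udp://1514]
sourcetype = vmw-syslog
index = vmware_esxilog
connection_host = ip
no_appending_timestamp = true

Keeping this on a port separate from the vCenter-related syslog traffic means each input gets its own sourcetype, and the add-on's field extractions apply cleanly.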
Hi @asah  No, it isn't possible to use a Splunk Deployment Server (DS) to manage installations of native OTel collectors currently; the DS can only be used for pushing apps out to Splunk Enterprise / Universal Forwarders.

*HOWEVER* The Splunk Add-on for the OpenTelemetry Collector can be deployed to a Splunk forwarder (UF/HF) via a Deployment Server, and this app aims to solve this issue and actually allow management of OTel via the DS (see the serverclass sketch after this post). By deploying the Splunk Distribution of the OpenTelemetry Collector as an add-on, customers wishing to expand to Observability can do so more easily, by taking advantage of existing tooling and know-how about using Splunk Deployment Server or other tools to manage Technical Add-Ons and .conf files. You can now deploy, update, and configure OpenTelemetry Collector agents in the same manner as any technical add-on.

Check out this blog post for more info: https://www.splunk.com/en_us/blog/devops/announcing-the-splunk-add-on-for-opentelemetry-collector.html
And also this page on how to configure it: https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-addon/collector-addon-configure-instance.html

So in short, whilst you can't manage your existing K8s deployment of OTel, you could switch to using UFs which connect back to your DS and pull their config from there, if you are willing to switch out to a UF... but then if you're going to install a UF to manage OTel, you might as well send the logs via the UF to Splunk Cloud?! (Unless there is another reason you need/want OTel, such as instrumentation.)

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

Since you're already testing the OTel collector on your K8s cluster I assume you've already sorted out that side of the deployment process, but in case it's of any help there are some docs at https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-linux/collector-linux-intro.html#collector-linux-intro and https://docs.splunk.com/observability/en/gdi/opentelemetry/deployment-modes.html which may be useful.

Regarding Splunk Add-on for the OpenTelemetry Collector, this
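To illustrate the DS-managed approach mentioned above, a serverclass.conf sketch on the deployment server might look like this. The server class name, whitelist pattern and the add-on folder name (Splunk_TA_otel) are assumptions/placeholders, so match them to the add-on you actually download and your own host naming:

# serverclass.conf on the deployment server (hypothetical names)
[serverClass:otel_collectors]
whitelist.0 = linux-app-*

# Push the OpenTelemetry Collector add-on to matching clients and
# restart splunkd so the new inputs are picked up.
[serverClass:otel_collectors:app:Splunk_TA_otel]
stateOnClient = enabled
restartSplunkd = true

The add-on then lands in $SPLUNK_HOME/etc/apps on each matching forwarder like any other deployed TA, and its collector config can be managed the same way as other .conf files.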
Hey @asimit  Out of interest, what LLM are you using to generate these responses? By the way, half of the links you posted are hallucinations.