Hey @AsmaF2025, try using another name for the time token instead of global_time, and use the new name as the token passed to the other dashboard. I believe there's a conflict, since both dashboards already have global_time present as a token. Let us know if it works or not and we can troubleshoot further. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
Hi, for the past week, the service "splunk-otel-collector" has not started:

Jul 21 14:00:22 svx-jsp-121i systemd[1]: Started Splunk OpenTelemetry Collector.
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:483: Set config to /etc/otel/collector/agent_config.yaml
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:539: Set memory limit to 460 MiB
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:524: Set soft memory limit set to 460 MiB
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:373: Set garbage collection target percentage (GOGC) to 400
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:414: set "SPLUNK_LISTEN_INTERFACE" to "127.0.0.1"
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025-07-21T14:00:22.250+0200 warn envprovider@v1.35.0/provider.go:61 Configuration references unset environment variable {"name": "SPLUNK_GATEWAY_URL"}
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: Error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 'service.telemetry.metrics' decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: '' has invalid keys: address
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 main.go:92: application run finished with error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 'service.telemetry.metrics' decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: '' has invalid keys: address
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Main process exited, code=exited, status=1/FAILURE
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Service RestartSec=100ms expired, scheduling restart.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Scheduled restart job, restart counter is at 5.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: Stopped Splunk OpenTelemetry Collector.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Start request repeated too quickly.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: Failed to start Splunk OpenTelemetry Collector.

I need help. Regards, Olivier
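The "'' has invalid keys: address" error matches a breaking change in recent OpenTelemetry Collector releases: the service.telemetry.metrics.address setting was removed in favor of the OpenTelemetry-style readers configuration. A minimal sketch of the replacement, assuming your agent_config.yaml still carries the old address key (the host and port values here are illustrative, not taken from your config):

```yaml
service:
  telemetry:
    metrics:
      # The old form, rejected by newer Collectors:
      #   address: "0.0.0.0:8888"
      # A pull-based Prometheus reader replaces it:
      readers:
        - pull:
            exporter:
              prometheus:
                host: "0.0.0.0"
                port: 8888
```

If the address line was added locally, removing or converting it like this should let the service start again; the SPLUNK_GATEWAY_URL warning earlier in the log is unrelated to the fatal error.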
Thank you for the answers, but I cannot make curl work on my Windows machine, so I created a PowerShell script instead, which works as it's supposed to. I'm also looking forward to the option of running a test via just one API call, which would obviously make the post-deployment checks for our CI/CD pipeline easier. I also opened a support ticket and am discussing it there. Let's wait and see when Splunk can add that to the Observability API. Thanks for the answers, best regards
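For anyone else translating curl calls to PowerShell for the Observability API, a rough sketch is below. The realm (us1), the /v2/detector endpoint, and the token environment variable are assumptions for illustration, not the poster's actual script:

```powershell
# Hypothetical sketch: listing detectors via the Observability API.
# $env:SFX_TOKEN and the 'us1' realm are placeholders - adjust for your org.
$headers = @{ "X-SF-TOKEN" = $env:SFX_TOKEN }
$resp = Invoke-RestMethod -Method Get `
    -Uri "https://api.us1.signalfx.com/v2/detector" `
    -Headers $headers
$resp.results | Select-Object name, id
```

Invoke-RestMethod parses the JSON response for you, which is the main reason it tends to be less fiddly than curl.exe on Windows.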
Hello All, I need guidance on passing the default global time token from one Studio dashboard to another Studio dashboard. Both dashboards have the same default global time token, with no changes made, and the token is used across the data sources of the respective panels.

I use the custom URL below under drilldown to pass the token to the other dashboard:

https://asdfghjkl:8000/en-US/app/app_name/dashboard_name?form.global_time.earliest=$global_time.earliest$&form.global_time.latest=$global_time.latest$

On the target page, below is my input. On redirect, it always loads the dashboard with the default value declared on the target dashboard:

{
  "type": "input.timerange",
  "options": {
    "token": "global_time",
    "defaultValue": "0,"
  },
  "title": "Global Time Range"
}

Kindly advise: the time range I select on the main dashboard should be the same one passed to the sub-dashboard.
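Following the renaming suggestion, a sketch of what the target dashboard's input could look like (the token name gt_shared and the default range are hypothetical, just to show the shape):

```json
{
  "type": "input.timerange",
  "options": {
    "token": "gt_shared",
    "defaultValue": "-24h@h,now"
  },
  "title": "Global Time Range"
}
```

The drilldown URL on the source dashboard would then pass form.gt_shared.earliest=$global_time.earliest$&form.gt_shared.latest=$global_time.latest$, so the incoming URL parameters no longer collide with a token name the target dashboard already initializes to its own default.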
This is the issue when you connect Splunk with AD: Splunk will not store authentication logs locally, and you will not be able to find them in settings or in the logs. I have a different SIEM where I can see everything locally, since users are local and not from AD.
Thanks everyone for your responses. The issue was due to the DATETIME_CONFIG setting in props.conf. It was set to a custom value, which was causing packets to drop. Setting DATETIME_CONFIG = NONE resolved the issue.
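For reference, the resulting props.conf stanza would look something like this (the sourcetype name is a placeholder; DATETIME_CONFIG = NONE tells Splunk to skip timestamp extraction and use the time the event was received):

```
[my_custom_sourcetype]
DATETIME_CONFIG = NONE
```
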
Are you doing indexed extractions on the JSON data? That's not such a good idea, as it can bloat your index with stuff you don't need there.

The question is not about "optimising for large datasets"; it's more about using the right queries for the data you have, large or small. I suggest you post some example queries, as the community can offer advice on whether they are good or not so good - use the code block button (<>) above.

See my post in another thread about performance: https://community.splunk.com/t5/Splunk-Search/Best-Search-Performance-when-adding-filtering-of-events-to-query/m-p/750038#M242251

As @PickleRick says, the job inspector is your friend (see scanCount), and reducing that number will improve searches. Use subsearches sparingly, and avoid join and transaction - they are almost never necessary.

Summary indexing itself will not necessarily speed up your searches, particularly if the search that creates the summary index is bad and the search that searches the summary index is also bad. A summary index does not mean faster - it's just another index with data, and you can still write bad searches against it.

Please share some of your worst searches and we can try to help.
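As one concrete illustration of the "avoid join" point, a two-search join can usually collapse into a single pass over the data with stats. The index, sourcetype, and field names below are made up for the example:

```
index=web (sourcetype=access OR sourcetype=errors)
| stats count(eval(sourcetype="access")) AS requests
        count(eval(sourcetype="errors"))  AS errors
        BY clientip
```

This reads the events once and groups in memory, instead of running two searches and matching rows afterwards, which is where join-based SPL typically loses time (and silently truncates results).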
Hi all, Multiple universal forwarders are installed on both Windows and Linux, and they work fine. The deployment server's forwarder management tab no longer shows them; however, after making changes to apps in /opt/splunk/etc/deployment-apps/app, the forwarders phoned home to the deployment server and received the changes, but I still have issues managing them. I found a lot of these logs when I checked the internal index:

INFO DC:DeploymentClient [8072 PhonehomeThread] - channel=deploymentServer/phoneHome/default Will retry sending phonehome to DS; err=not_connected

There is no problem connecting from the UF to the DS on TCP port 8089. Does anyone have any ideas on how I could solve this?

DS version = 9.3.1
UF version = 9.3.1

$ splunk show deploy-poll
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
Deployment Server URI is set to "10.121.29.10:8089".
It doesn't make any sense that there is no option for this diagram and no support for it... I need to read more about this Link Graph. What is the best way to build a diagram for a network (IP, VIP, FW, subnet...)?
Not necessarily. There are separate add-ons for specific services (one for Teams, another for Security (Defender and Defender for Endpoint), and so on). This one covers getting data from Event Hub, but you might need another add-on to parse your data properly and map fields to CIM. I'm not sure, though, whether pushing the data through Event Hub won't mangle the events, since some of those add-ons expect their inputs to run differently (Graph API?). Go to Splunkbase, type in "microsoft", and check it out.
Hi @LS1, you should try something like this:

index=security action IN ("Blocked", "Started", "Success")

I suggested clicking on the value to be sure that the syntax is correct. Ciao. Giuseppe
Hi @Nawab, if an LDAP user never logged in to Splunk, you don't see them; you can see only users that have logged in at least once. To see the logged-in users and their last login timestamp, you can run a simple search like the following:

index=_audit action=success sourcetype=audittrail
| stats latest(_time) AS _time count BY user

It's the same if you look at the list of users in the GUI under [Settings > Users]: you can see only internal users and the LDAP users that have logged in. Ciao. Giuseppe
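To also include users that the REST layer knows about but who may not appear in recent audit events, a sketch that merges the REST user list with the audit data without using join (note the caveat above still applies: LDAP users typically show up in the REST endpoint only after Splunk has seen them):

```
| rest /services/authentication/users splunk_server=local
| rename title AS user
| fields user roles
| append
    [ search index=_audit action=success sourcetype=audittrail
      | stats latest(_time) AS last_login BY user ]
| stats values(roles) AS roles max(last_login) AS last_login BY user
| fieldformat last_login = strftime(last_login, "%F %T")
```

Users with an empty last_login column are known to Splunk but have no successful login recorded within the search's time range.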
Hello GCusello, yes, I clicked on the word(s) "Blocked" and "Started" in the "Action" field window. When I use the query index=security action="*", all three actions (Blocked, Started and Success) appear, as shown in my original question. If I click on "Success", all of my events are returned; when I click on the other two, my results are "No results found". I went down the list of Interesting Fields and tried all of the fields labeled with an "a" (not sure how to type that one) instead of an octothorpe (#), and every one of them worked properly. When I say I tried, I mean I opened the Interesting Fields and clicked on the desired selection, which alters the search criteria, the same way I have done with Blocked and Started. I do not know how the categories get created in the Interesting Fields, but it appears there is something wrong with Blocked and Started.
@siv Dashboard Studio does not support custom visualizations (like Network Diagram Viz from Splunkbase); those are only supported in Classic (Simple XML) dashboards. If you want to stay in Dashboard Studio, use the built-in Link Graph visualization instead.
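A bare-bones sketch of what the visualization stanza could look like in the Studio source. The type name splunk.linkgraph and the data source id ds_network_search are assumptions here, so check them against your Studio version's visualization list:

```json
{
  "type": "splunk.linkgraph",
  "dataSources": {
    "primary": "ds_network_search"
  },
  "title": "Network topology"
}
```

The backing search (ds_network_search) would need to return the hop-to-hop fields (e.g. source node, target node) that the link graph renders as connected steps.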
I have a requirement where I want to see all users and their last login time. We are connected through LDAP, so Settings > Users > last login time does not work. I tried the query below, but it only shows the latest users, not all of them:

| rest /services/authentication/httpauth-tokens splunk_server=*
| table timeAccessed userName splunk_server

I also want to know when a user was created on Splunk, as users are created via LDAP.