All Topics

Hi all, I'm hoping someone can help refine an SPL query to extract escalation data from Mission Control. The query is largely functional (feel free to borrow it), but I am encountering a few issues:

Status Name field: this field, intended to provide the status of the incident (with a default value if not specified), is currently returning blank results.
Summary and Notes fields: these fields are returning incorrect data, displaying random strings instead of the expected information.
Escalation priority: the inclusion of the "status" field was an attempt to retrieve the escalation priority, but it is populating with an unrelated field that does not reflect the case priority (1-5).

I also tried the mc_investigations_lookup table, but it doesn't display the current case status or priority either. Any guidance or support in resolving these issues would be greatly appreciated.

SPL:

| mcincidents
| `get_realname(creator)`
| fieldformat create_time=strftime(create_time, "%c")
| eval _time=create_time, id=title
| `investigation_get_current_status`
| `investigation_get_collaborator_count`
| spath output=collaborators input=collaborators path={}.name
| sort -create_time
| eval age=tostring(now()-create_time, "duration")
| eval new_time=strftime(create_time,"%Y-%m-%d %H:%M:%S.%N")
| eval time=rtrim(new_time,"0")
| table time, age, status, status_name, display_id, name, description, assignee, summary
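Not a confirmed fix, but one hedged avenue for the blank status_name: default it explicitly and re-extract summary straight from the raw JSON, in case one of the macros is overwriting them. The status_label field and the spath path below are illustrative guesses; the actual Mission Control JSON keys may differ:

| mcincidents
| eval status_name=coalesce(status_name, status_label, "Unknown") ```fall back to a default when no status is set```
| spath input=_raw path=summary output=summary ```re-extract summary directly from the raw JSON```
| table status_name, summary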
Customer would like to renew a perpetual contract from July 2025 to July 2026, but the version they are using now is 8.1.2, and P3 support will be EOL after 12 May 2026. The customer is asking: what is the support level, or what are the details, after this EOL date?
Hello guys, I'm trying to get the following table: I have the following fields in my index: ip, mac, lastdetect (a timestamp), and user_id. Below is what I have tried so far: When I transpose, I get the following: I'm a bit stuck. Can anyone help me achieve my goal (getting a table similar to the first table just above)? Thanks
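The screenshots did not survive, so this is only a sketch of one common target shape: one row per user_id, keeping the most recent ip and mac. your_index and the lastdetect time format are placeholders:

index=your_index
| eval _time=strptime(lastdetect, "%Y-%m-%d %H:%M:%S") ```adjust the format string to match lastdetect```
| stats latest(ip) as ip, latest(mac) as mac, latest(lastdetect) as lastdetect by user_id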
Good day, everyone. I've built multiple use cases with correlation searches. The concern here is that I am getting multiple alerts for the same case. How can I set it to fire only one alert containing all the data? This screenshot explains more:
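Two hedged angles, since the screenshot is not visible here: enable throttling in the correlation search's settings (window duration plus group-by fields), and/or collapse the matching events into a single result row before the alert condition runs, so only one alert can fire. Field names in this sketch are placeholders:

index=security_events ```placeholder base search```
| stats values(src) as src, values(signature) as signature, count by dest
```one result row per dest, so at most one alert per affected host```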
Hello, I am facing an issue when a saved report is used in a Simple XML dashboard via

| loadjob savedsearch="madhav.d@xyz.com:App_Support:All Calls"

My time zone preference is (GMT+01:00) Greenwich Mean Time: London, and the report I am referring to (All Calls) was also created by me and runs every 15 minutes. When I use this report in a Simple XML dashboard, it only shows data from an hour ago. Example: when the report runs at 08:00 AM and I check the dashboard at 08:05 AM, it shows the results of the 07:00 AM run, not the latest one. I suspect this is due to the recent daylight saving time change in the UK. Can someone please help with how I should handle this? Thank you. Regards, Madhav
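A hedged first diagnostic: compare what the scheduler thinks the next run is against wall-clock time, to see whether the DST shift moved the schedule. This uses the standard saved/searches REST endpoint and the report name from the post:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="All Calls"
| table title, cron_schedule, next_scheduled_time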
Hi, I've been working as a frontend developer for six months now. I'm using the splunk/react-ui toolkit. https://splunkui.splunk.com/Toolkits/SUIT/Overview https://www.npmjs.com/package/@splunk/create I created dashboard pages using the @splunk/create command, but the dashboards seem unable to share state with other dashboards. Is there any way to make this possible? (The screenshot shows my project directory, /packages/<splunk app>; these are the dashboard pages.) Even with react-router, state sharing between dashboard pages was not possible. Splunk React UI can't share data props between dashboard pages, and there's no documentation on how to do it. Can somebody help me?
This is the SPL I'm using:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| fields title
| search title=Reports*
| eval dayEarliest="-1d@d", dayLatest="@d"
| map maxsearches=100000 search="savedsearch \"$title$\" etime=\"$dayEarliest$\" ltime=\"$dayLatest$\" | addinfo | collect index=INDEXNAME testmode=false | search"

The error I get is: [map]: No results to summary index. Why?
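A hedged guess at what is going wrong (a sketch, not a confirmed diagnosis): etime and ltime are not time-range options for savedsearch (it treats key=value pairs as token substitutions for a parameterized saved search, so unless the saved searches reference $etime$/$ltime$ they have no effect), and the trailing bare | search discards whatever collect returns, so map sees no results. A tightened version of the same pipeline, with INDEXNAME kept from the post:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| fields title
| search title=Reports*
| map maxsearches=100000 search="savedsearch \"$title$\" | addinfo | collect index=INDEXNAME testmode=false"

Each saved search then runs over its own saved time range; forcing yesterday's window generally has to happen inside the saved searches themselves.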
I want to add a trendline to this chart:

index=my_index
| timechart dc(USER) as DISTINCT_USERS

How do I accomplish this? Thanks! Jonathan
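The trendline command does exactly this; sma5 below is an arbitrary 5-period simple moving average (ema and wma are also supported):

index=my_index
| timechart dc(USER) as DISTINCT_USERS
| trendline sma5(DISTINCT_USERS) as trend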
I was newly assigned to a project and didn't get a proper KT from the people who left. I have queries about my current architecture and configuration, and I am not well versed in advanced admin concepts. Please help me with these queries:

We have 6 indexers (hosted on AWS as EC2 instances, not Splunk Cloud) with 6.9 TB of disk storage and a 1.5 GB/day license. Is this OK?

I am checking the retention period, but frozenTimePeriodInSecs and maxTotalDataSizeMB are not set anywhere in local; they only exist in default. I am also checking whether an archival location is set.

indexes.conf on the Cluster Manager:

[new_index]
homePath   = volume:primary/$_index_name/db
coldPath   = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

volumes indexes.conf:

[volume:primary]
path = $SPLUNK_DB
#maxVolumeDataSizeMB = 6000000

There is one more app pushing this indexes.conf to the indexers (I am not at all aware of this one):

[default]
remotePath = volume:aws_s3_vol/$_index_name
maxDataSize = 750

[volume:aws_s3_vol]
storageType = remote
path = s3://conn-splunk-prod-smartstore/
remote.s3.auth_region = eu-west-1
remote.s3.bucket_name = conn-splunk-prod-smartstore
remote.s3.encryption = sse-kms
remote.s3.kms.key_id = XXXX
remote.s3.supports_versioning = false

And coldToFrozenDir and coldToFrozenScript are not mentioned anywhere.

So are we storing archival data in the S3 bucket now? But maxDataSize is mentioned there, which relates to hot-to-warm rolling. So apart from hot bucket data, is everything else stored in the S3 bucket?

And how will Splunk pull the data back from the S3 bucket to search and run queries?
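Regarding retention: a hedged way to see the effective values per index, whichever .conf layer they come from, is the standard data/indexes REST endpoint:

| rest /services/data/indexes splunk_server=*
| table splunk_server, title, frozenTimePeriodInSecs, maxTotalDataSizeMB, coldToFrozenDir

And if I read that config correctly: with remotePath set and no coldToFrozenDir/coldToFrozenScript, this is SmartStore, where warm buckets live in S3 (only hot buckets stay purely local), searches pull buckets from S3 into a local cache on demand, and frozen buckets are deleted rather than archived.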
Is it possible to identify obsolete dashboards, or the last time a dashboard was executed?
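One hedged approach is to mine the web access logs for dashboard views; the rex pattern below is a rough sketch and may need tuning for your URL scheme:

index=_internal sourcetype=splunk_web_access
| rex field=uri "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)"
| where isnotnull(dashboard)
| stats latest(_time) as last_viewed, count by app, dashboard
| eval last_viewed=strftime(last_viewed, "%Y-%m-%d %H:%M:%S")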
Hello folks, my organization is struggling with ingesting the Cisco Firepower audit (sys)logs into Splunk; we've been able to successfully ingest all the other sources. With the Firepowers only offering syslog over 514/udp (which is unavailable according to Splunk) or a HEC configuration without tokens (so Splunk would drop the events), our options appear limited. Has anyone else come across this issue and solved it?
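For reference, a direct UDP input on 514 is configurable in inputs.conf; the usual caveat is that binding a port below 1024 requires splunkd to run as root (or a firewall port redirect), which is why a syslog server or Splunk Connect for Syslog in front of the HF is the more common pattern. A minimal sketch, with index and sourcetype as placeholders:

[udp://514]
index = network
sourcetype = cisco:firepower:syslog
connection_host = ip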
Hi Splunk Community, we've developed a new version of our Splunk app and recently published it to Splunkbase. However, we're facing issues when upgrading the app via Manage Apps in Splunk Web. What's happening: after upgrading the app, it stops functioning. We've added new input fields to the setup page (setup.xml), but these changes do not appear immediately in the UI after the upgrade. The new fields only show up after clearing the browser cache and doing a hard reload. Interestingly, if we completely remove the old version of the app and do a fresh install of the new version from Splunkbase, everything works perfectly: the setup UI loads correctly, and logs appear as expected. Any suggestions would be highly appreciated. Thanks
Hi, we have Splunk ITSI, and the team wants to build batch job monitoring with Splunk ITSI capabilities. Are there any pre-built dashboards or service/KPI SPL queries available that you could share with us as a starting point? We would appreciate your guidance on this. Looking forward to your response.
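There is no single canned content pack to point to here, but an ITSI KPI base search for batch jobs often reduces to a sketch like this (index, sourcetype, and field names are all placeholders for whatever your scheduler logs actually contain):

index=batch_logs sourcetype=scheduler:jobs ```placeholders```
| stats latest(status) as status, max(duration) as duration by job_name
| eval failed=if(status="FAILED", 1, 0)
```job_name maps to the ITSI entity; duration and failed feed the KPIs```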
Hi all, I would like to read Splunk-indexed logs/data using a text editor (like Notepad). I understand Splunk-indexed logs are compressed and further processed into a format only Splunk can read (not human-readable), which works well for searching and Splunk's other capabilities. But I'm wondering: is there a way to convert these indexed logs into a human-readable format?
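Two hedged options. On an indexer, the exporttool CLI can dump a whole bucket to CSV (the bucket path below is illustrative):

splunk cmd exporttool /opt/splunk/var/lib/splunk/defaultdb/db/db_1600000000_1500000000_10 /tmp/export.csv -csv

Or, from the search bar, export the raw events of any search to a CSV file on the search head:

index=your_index ```placeholder```
| table _time, _raw
| outputcsv my_export.csv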
AppDynamics Native Agent Not Loading with Nexe-bundled NodeJS Application

Environment
- Node.js version: 20.9.0
- AppDynamics Node.js Agent version: Latest
- Nexe version: Latest
- OS: Windows Server
- Build tool: Nexe with custom build script

Project Structure
We have a Node.js API service that uses:
- Express.js for REST API
- Native modules (specifically the AppDynamics agent)
- Various npm packages for business logic
- Built and distributed as a single executable using Nexe

Issue
When running our Nexe-bundled application, the AppDynamics agent fails to initialize with the following error:

Appdynamics agent cannot be initialized due to Error: Missing required module. \\?\C:\Path\nodejs_api\node_modules\appdynamics-libagent-napi\appd_libagent.node
TypeError: Cannot read properties of undefined (reading 'init') at LibagentConnector.init

What We've Tried
1. Including the native module as a resource in the Nexe build:
--resource "./node_modules/appdynamics-libagent-napi/appd_libagent.node"
2. Copying the native module to the correct directory structure in the distribution:
dist/
- api_node.exe
- node_modules/
-- appdynamics-libagent-napi/
--- appd_libagent.node

According to Nexe's documentation, when dealing with native modules (.node files), the module should be placed in the node_modules directory relative to the compiled executable's location. This means that while the application is bundled into a single executable, native modules are expected to be loaded from the filesystem at runtime.

Question
How can we properly bundle and load the AppDynamics native agent with a Nexe-compiled Node.js application? Is there a specific configuration or approach needed for native modules, particularly AppDynamics, to work with Nexe? Any guidance or working examples would be greatly appreciated.
Hey everyone, I'm working on a Splunk dashboard where one of the panels is supposed to show real-time logins from a specific source. The search runs fine when I run it manually in the search bar, but in the dashboard panel it doesn't seem to update properly unless I refresh the whole page. The time picker is set to "Last 15 minutes (real-time)" and auto-refresh is on, but the panel still gets stuck with old data sometimes. Has anyone run into something like this before? Could it be a refresh interval issue or something in the panel settings I'm missing? Thanks in advance for any tips!
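One hedged thing to try: real-time searches in dashboard panels are often flaky and resource-hungry, and a plain 15-minute window with a per-search refresh usually behaves better. A Simple XML sketch; the query and the 30s interval are placeholders:

<search>
  <query>index=auth sourcetype=logins</query>
  <earliest>-15m</earliest>
  <latest>now</latest>
  <refresh>30s</refresh>
  <refreshType>delay</refreshType>
</search>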
Hi, I set up syslog-ng to receive syslog from devices, and the Splunk HF on the same server reads those log files. However, I am not able to restart syslog-ng and am getting an error. syslog-ng runs as root and the log file directory is owned by the splunk user.

Job for syslog-ng.service failed because the control process exited with error code.

systemctl status syslog-ng.service:

× syslog-ng.service - System Logger Daemon
Loaded: loaded (/usr/lib/systemd/system/syslog-ng.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Sat 2025-04-05 11:39:04 UTC; 9s ago
Docs: man:syslog-ng(8)
Process: 1800 ExecStart=/usr/sbin/syslog-ng -F $SYSLOGNG_OPTS (code=exited, status=1/FAILURE)
Main PID: 1800 (code=exited, status=1/FAILURE)
Status: "Starting up... (Sat Apr 5 11:39:04 2025"
CPU: 4ms

Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Scheduled restart job, restart counter is at 5.
Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Start request repeated too quickly.
Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Failed with result 'exit-code'.
Apr 05 11:39:04 if2 systemd[1]: Failed to start syslog-ng.service - System Logger Daemon.
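The systemd status alone rarely shows the actual configuration error; two standard next steps that usually surface it:

# validate the configuration without starting the daemon
syslog-ng --syntax-only

# show the full log lines from the failed start attempts
journalctl -xeu syslog-ng.service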
Hi, we just started testing/experimenting with Splunk. We followed a Splunk4Rookies workshop, but that focused on SPL and dashboards, not on ingesting data. We got the docker-compose installation up and running. I have installed a universal forwarder on a Linux server and was able to send /var/log to the Splunk install.

I found various posts stating that:
* I should be using the Splunk Add-on for Unix and Linux
* it needs to be installed on the forwarder
* I should be using a deployment server instead of configuring locally on the Linux server

I'm looking for information on how to actually install a deployment server. I seem to be going in circles between pages with old comments (pre-2016, https://community.splunk.com/t5/Deployment-Architecture/How-to-configure-a-deployment-server/m-p/131015/thread-id/4975) and broken links, or pages explaining why I would need a deployment server.

Questions:
* Do I need to bother with a deployment server at this stage?
* Is it really bad if I install the Splunk Add-on for Unix and Linux locally? And how do I actually install it locally?
* Can you point me to a basic step-by-step explanation of how to install a deployment server?
* This is intended for a test; can we add the deployment server capability to our Splunk server created with docker-compose?
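On the deployment server questions: any full Splunk Enterprise instance can act as one. You drop apps into $SPLUNK_HOME/etc/deployment-apps, describe who gets what in serverclass.conf, and point each forwarder at it. A minimal sketch; the class name and hostname are placeholders:

serverclass.conf on the deployment server:

[serverClass:linux_hosts]
whitelist.0 = *

[serverClass:linux_hosts:app:Splunk_TA_nix]
restartSplunkd = true

On each universal forwarder:

splunk set deploy-poll deployserver.example.com:8089
splunk restart

For a single test forwarder, installing the add-on locally is not really bad; a deployment server mainly pays off once there are more than a handful of forwarders to keep in sync.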
Hi all, could you please clarify the difference between data models and Splunk dashboards? Thanks
Hi all, I have created a query and it is working fine in search. I am sharing part of the code from my dashboard. In the first part of the call, you can see I have hardcoded the earliest and latest times, but I want to pass those as input values selected from the time input on the dashboard, and then run the remaining part of the query for the whole day (or another time range), because it is possible that a request received during the selected window gets processed later in the day. How can I achieve this? I also want to hide a few columns at the end, like messageGUID, request_time, and output_time.

<panel>
  <table>
    <title>Contact - Timings</title>
    <search>
      <query>```query for apigateway call```
index=aws* earliest="03/28/2025:13:30:00" latest="03/28/2025:14:35:00" Method response body after transformations: sourcetype="aws:apigateway"
| rex field=_raw "Method response body after transformations: (?&lt;json&gt;[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200 and action="Create"
| rename _time as request_time
```dedup is added to remove duplicates```
| dedup messageGUID
| append ```query for event bridge```
    [ search index="aws_np"
    | rex field=_raw "messageGUID\": String\(\"(?&lt;messageGUID&gt;[^\"]+)"
    | rex field=_raw "source\": String\(\"(?&lt;source&gt;[^\"]+)"
    | rex field=_raw "type\": String\(\"(?&lt;type&gt;[^\"]+)"
    | where source="MDM" and type="Contact" ```and messageGUID="0461870f-ee8a-96cd-3db6-1ca1f6dbeb30"```
    | rename _time as output_time
    | dedup messageGUID ]
| stats values(request_time) as request_time values(output_time) as output_time by messageGUID
| where isnotnull(output_time) and isnotnull(request_time)
| eval timeTaken=(output_time-request_time)/60
| convert ctime(output_time)
| convert ctime(request_time)
| eventstats avg(timeTaken) min(timeTaken) max(timeTaken) count(messageGUID)
| head 1</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>
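A hedged sketch of both changes. The hardcoded window can come from the dashboard's time input by putting the tokens inline in the query (field1 is the token name this panel already uses), while the appended subsearch keeps its own wider window; the whole-day modifiers below are illustrative:

index=aws* earliest=$field1.earliest$ latest=$field1.latest$ sourcetype="aws:apigateway" ...
| append
    [ search index="aws_np" earliest=@d latest=now ... ]

And to hide the working columns at the end while still using them in the calculation, drop them as the final step:

| fields - messageGUID, request_time, output_time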