All Posts


Hi Splunkers,

I'm running a Splunk Search Head Cluster (SHC) with 3 search heads, authenticated via Active Directory (AD). We have several custom apps deployed. Currently, users are able to:
- Create alerts
- Delete alerts
- Create reports

However, they are unable to delete reports.

Investigation Details

From the _internal logs, here's what I observed.

When deleting an alert, the deletion works fine:

192.168.0.1 - user [17/May/2025:11:06:59.687 +0000] "DELETE /en-US/splunkd/__raw/servicesNS/username/SOC/saved/searches/test-user-alert?output_mode=json HTTP/1.1" 200 421 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" - eac203572253a2bd3db35ee0030c6a76 68ms
192.168.0.1 - user [17/May/2025:11:06:59.690 +0000] "DELETE /servicesNS/username/SOC/saved/searches/test-user-alert HTTP/1.1" 200 421 "-" "Splunk/9.4.1 (Linux 6.8.0-57-generic; arch=x86_64)" - - - 65ms

When deleting a report, it fails with a 404 Not Found:

192.168.0.1 - user [17/May/2025:10:27:51.699 +0000] "DELETE /en-US/splunkd/__raw/servicesNS/nobody/SOC/saved/searches/test-user-report?output_mode=json HTTP/1.1" 404 84 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" - eac203572253a2bd3db35ee0030c6a76 5ms
192.168.0.1 - user [17/May/2025:10:27:51.702 +0000] "DELETE /servicesNS/nobody/SOC/saved/searches/test-user-report HTTP/1.1" 404 84 "-" "Splunk/9.4.1 (Linux 6.8.0-57-generic; arch=x86_64)" - - - 1ms

Alerts are created under the user's namespace (servicesNS/username/...) and can be deleted by the user. Reports appear to be created under the nobody namespace (servicesNS/nobody/...), which may be why users lack permission to delete them.

Has anyone faced a similar issue?
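One check that might narrow this down is querying the report's ACL over REST and, if needed, reassigning ownership. This is only a sketch: the hostname and admin credentials are placeholders, and the app (SOC) and report name are taken from the logs above.

```
# Inspect the report's current owner and sharing (look at the entry's "acl" block)
curl -k -u admin:changeme \
  "https://sh1.example.com:8089/servicesNS/nobody/SOC/saved/searches/test-user-report?output_mode=json"

# Reassign ownership so the user can manage (and delete) the report themselves
curl -k -u admin:changeme \
  "https://sh1.example.com:8089/servicesNS/nobody/SOC/saved/searches/test-user-report/acl" \
  -d owner=username -d sharing=app
```

If the GET itself also returns 404, the report may simply live in a different app or user context than the UI is addressing, which would match the DELETE failures above.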
Hi @Na_Kang_Lim 

@livehybrid is 100% right: TrackMe V1 has been out of date and unsupported for more than 2 years now, therefore I won't respond on this. Please consider upgrading to TrackMe V2: https://docs.trackme-solutions.com

There are options to migrate from V1 to V2 (https://docs.trackme-solutions.com/latest/migration_trackmev1.html), but I would suggest a fresh configuration rather than migrating things.
I've seen this exact issue before with Splunk Universal Forwarders. The "splunkd.pid doesn't exist" error combined with the "tcp_conn_open_afux ossocket_connect failed" messages typically happens when there's a conflict between how the Splunk process is started and managed.

Based on your description, this is likely one of two issues:
a. Duplicate systemd service files causing a "split brain" situation
b. Permission problems with the Splunk installation directory

For the first issue, check whether you have duplicate service definitions:

ls -la /usr/lib/systemd/system/SplunkForwarder.service
ls -la /etc/systemd/system/SplunkForwarder.service

If both exist, that's causing your problem! The one in /etc/systemd/system takes precedence, and they might have different user/permission settings. You can fix this by:

sudo rm /etc/systemd/system/SplunkForwarder.service
sudo systemctl daemon-reload
sudo systemctl restart SplunkForwarder

If that doesn't work, check the ownership of your Splunk files:

ls -la /opt/splunkforwarder

Make sure everything is owned by the correct user (typically splunk:splunk). If permissions are wrong, you can fix them with:

chown -R splunk:splunk /opt/splunkforwarder

As a last resort, the complete reinstall approach works well:

sudo systemctl stop SplunkForwarder
sudo yum remove splunk*
sudo rm -rf /opt/splunkforwarder

Then reinstall the forwarder and configure it properly. I've had good success with this approach when dealing with these mysterious pid and socket connection errors.

Please give karma for support. Happy Splunking!
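If you want to confirm which unit file systemd is actually honouring before deleting anything, plain systemd tooling is enough; nothing here is Splunk-specific:

```
# Print the unit file(s) systemd has loaded for the forwarder, including drop-in overrides
systemctl cat SplunkForwarder

# Show where the effective unit file lives, its enablement state, and the account it runs as
systemctl show SplunkForwarder -p FragmentPath -p UnitFileState -p User
```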
Hi @splunkville 

No, this will not work, because the source key (cmd_data) contains the shortened version, which has been broken up due to the space. Your transforms.conf and props.conf configs need adjustment. To extract the full value after cmd_data=, use this:

== props.conf ==
[yourSourcetype]
REPORT-full_cmd = full_cmd

== transforms.conf ==
[full_cmd]
REGEX = cmd_data=([^\]]+)\]
FORMAT = full_cmd::$1

The REGEX captures everything after cmd_data= up to the "]". REPORT- in props.conf applies the transform at search time.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
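Before touching props/transforms, you can prove the capture pattern at search time with rex. This is just a sketch, so substitute your real index and sourcetype:

```
index=your_index sourcetype=yourSourcetype
| rex "cmd_data=(?<full_cmd>[^\]]+)\]"
| stats count by full_cmd
```

Once the rex version returns the full command strings, the equivalent REPORT-based config above should extract the same values at search time.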
[cmd_data=list cm device recusive]

Splunk auto-extracts just [cmd_data=list]. End result: be able to filter on cmd_data and get the full cmd / multiple values.

Will these configs work?

transforms.conf
[full_cmd]
SOURCE_KEY = cmd_data
REGEX = (cmd_data)\S(?<full_cmd>.*)
FORMAT = full_cmd::$1

props.conf
EXTRACT-field full_cmd
I know it has been a while since this question was asked, but here is a simple XML dashboard that will show all the other dashboards in the app along with their descriptions and links to them. It is populated dynamically with a rest search. All you would need to do is edit the app nav to look something like the example below if you call the dashboard "overview". If you don't call it "overview", then change the first view element to whatever you named the dashboard. Specifically, it is looking for the name that is shown in the URL bar, not the editable label in the dashboard.

Navigation

<nav search_view="search">
  <view name="overview" default="true" />
  <view name="search" />
  <view name="analytics_workspace" />
  <view name="datasets" />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" />
</nav>

Dashboard

<dashboard version="1.1" theme="dark">
  <label>Overview</label>
  <row id="cards">
    <panel>
      <table>
        <search>
          <query>| rest /servicesNS/-/$env:app$/data/ui/views splunk_server=local
| rename eai:acl.app as app
| search isDashboard=1 app=$env:app$ title!=$env:page$
| table label description app title
| sort label</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="count">100</option>
        <option name="drilldown">cell</option>
        <drilldown>
          <link target="_blank">/app/$row.app$/$row.title$</link>
        </drilldown>
      </table>
    </panel>
  </row>
  <row id="styles_row">
    <panel>
      <html>
        <style>
          /* Hide the styles panel */
          #styles_row { display: none; }
          /* Remove search tools (refresh, etc) */
          .element-footer.dashboard-element-footer { display: none !important; }
          /* Remove hover background color */
          #statistics .results-table tbody td.highlighted { background-color: #FAFAFA !important; }
          /* Remove hover background color dark mode */
          .dashboard-panel[class*="dashboardPanel---pages-dark"] #statistics .results-table tbody td.highlighted { background-color: #31373B !important; }
          #statistics.results-table { padding: 10px; box-sizing: border-box !important; }
          /* Style the table to make it just a simple container element */
          body table { width: 100% !important; min-width: 100% !important; display: block; box-sizing: border-box !important; border: none !important; -webkit-box-shadow: none !important; box-shadow: none !important; }
          [id^="cards"] table tbody td { font-family: Splunk Platform Sans,Proxima Nova,Roboto,Droid,Helvetica Neue,Helvetica,Arial,sans-serif !important; }
          /* Hide the table header */
          thead { display: none; }
          /* Make the tbody a grid layout */
          tbody { display: grid; grid-template-columns: 33% 33% 33%; box-sizing: border-box !important; column-gap: 10px; row-gap: 10px; border: none !important; }
          /* Bold the dashboard title */
          tr td:nth-child(1) { font-weight: bold; }
          /* Make the card text black in light mode */
          tr td:nth-child(2) { color: #000000 !important; }
          /* Make the card text white in dark mode */
          .dashboard-panel[class*="dashboardPanel---pages-dark"] tr td:nth-child(2) { color: #FFFFFF !important; }
          /* Hide the 3rd (app) and 4th (title) columns */
          tr td:nth-child(3), tr td:nth-child(4) { display: none; }
          /* Turn the table rows into cards */
          tbody tr { border-radius: 5px; border: 1px solid #999999; padding: 10px; }
          tbody tr, tbody tr td { box-sizing: border-box !important; display: block; background: #FAFAFA !important; }
          .dashboard-panel[class*="dashboardPanel---pages-dark"] tbody tr, .dashboard-panel[class*="dashboardPanel---pages-dark"] tbody tr td { background: #31373B !important; }
          .table td { padding-top: 0 !important; padding-bottom: 0 !important; }
        </style>
      </html>
    </panel>
  </row>
</dashboard>
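One practical note on the navigation snippet above: the app-level nav XML can be edited in the UI under Settings > User interface > Navigation menus, or on disk at the path below (standard on-prem layout; swap in your app's directory name):

```
$SPLUNK_HOME/etc/apps/<your_app>/local/data/ui/nav/default.xml
```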
Hi @Na_Kang_Lim 

Based on the lookup name, it sounds like you are on TrackMe V1; have you considered upgrading to V2.x? There are a bunch of bug fixes which could be impacting your issue here, and new features you can use in the later versions.

@asimit I'm intrigued by the "trackme_max_data_tracker_history" macro. Where can I find this? I can't see it in my installation of TrackMe (V1 or V2)! Also, the REST endpoint you posted doesn't seem to exist.
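For anyone else wanting to verify this on their own instance, a quick way to list every TrackMe macro that actually exists is the standard macros configuration endpoint:

```
| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| search title="trackme*"
| table title eai:acl.app definition
```

If nothing matching trackme_max_data_tracker_history comes back, the macro simply isn't present in the product.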
Hi @token2 

I disagree with some of the information in the markdown posted on the other post, specifically around the API usage ("Gathers performance data through the vCenter API"). This is not correct: neither of the apps mentioned connects to the API. The vCenter app uses syslog + monitor inputs (file monitoring) to pick up events, and the ESXi app is purely syslog.

The Splunk_TA_vcenter (Splunk Add-on for vCenter Log) should be installed on a Splunk Universal Forwarder running on the vCenter Server host, so it can monitor vCenter log files directly from the filesystem. This takes vCenter logs only, which last time I checked didn't seem to include the individual ESXi logs.

The Splunk Add-on for VMware ESXi Logs should be installed on a Splunk forwarder or heavy forwarder that is receiving syslog data from the ESXi hosts. If you install this on the same host as the vCenter app, ensure you use a unique syslog port for it so the sourcetype field extractions can work correctly (see the inputs sketch after this post).

If you want performance info/metrics etc., then you need the Splunk Add-on for VMware Metrics: a collection of add-ons used to collect and transform the Performance, Inventory, Tasks, and Events data from VMware vCenters, ESXi hosts, and virtual machines. It contains the following components:
- Splunk_TA_vmware_inframon: runs a Python-based API data collection engine, collects data from the VMware vSphere environment, and performs field extractions for VMware data.
- SA-Hydra-inframon.

Depending on your use case, you might prefer to use all, or a specific subset, of the many VMware apps available! Please let me know if you want further clarity on any of these, and feel free to share your use cases so we can help refine which apps might benefit you.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
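On the unique syslog port point, here is a minimal sketch of what a dedicated input could look like; the port, sourcetype and index names are illustrative assumptions, so match them to what the ESXi add-on's documentation expects:

```
# inputs.conf on the forwarder receiving the ESXi syslog feed
# Port, sourcetype and index below are placeholders - align them with the add-on docs.
[udp://1514]
sourcetype = vmw-syslog
connection_host = ip
index = vmware_esxi
```

In production many people prefer rsyslog/syslog-ng (or Splunk Connect for Syslog) writing to files that a monitor input picks up, rather than a direct UDP input, but the port-separation principle is the same.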
Hi @asah 

No, it isn't currently possible to use a Splunk Deployment Server (DS) to manage installations of native Otel collectors; the DS can only be used for pushing apps out to Splunk Enterprise instances and Universal Forwarders.

*HOWEVER* the Splunk Add-on for the OpenTelemetry Collector can be deployed to a Splunk forwarder (UF/HF) via a Deployment Server, and this app aims to solve exactly this problem by allowing management of Otel via the DS.

By deploying the Splunk Distribution of the OpenTelemetry Collector as an add-on, customers wishing to expand to Observability can do so more easily, by taking advantage of existing tooling and know-how about using Splunk Deployment Server or other tools to manage Technical Add-Ons and .conf files. You can now deploy, update, and configure OpenTelemetry Collector agents in the same manner as any technical add-on.

Check out this blog post for more info: https://www.splunk.com/en_us/blog/devops/announcing-the-splunk-add-on-for-opentelemetry-collector.html

And also this page on how to configure it: https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-addon/collector-addon-configure-instance.html

So in short, while you can't manage your existing K8s deployment of Otel, you could switch to using UFs which connect back to your DS and pull their config from there (a serverclass sketch follows this post), if you are willing to switch to a UF... but then, if you're going to install a UF to manage Otel, you might as well send the logs via the UF to Splunk Cloud?! (Unless there is another reason you need/want Otel, such as instrumentation.)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

Since you're already testing the Otel collector on your K8s cluster, I assume you've already sorted out that side of the deployment process, but in case it's of any help there are some docs at https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-linux/collector-linux-intro.html#collector-linux-intro and https://docs.splunk.com/observability/en/gdi/opentelemetry/deployment-modes.html which may be useful.

Regarding the Splunk Add-on for the OpenTelemetry Collector, this
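If you do go the UF-plus-DS route, distributing the add-on is an ordinary serverclass entry. A minimal sketch: the class name, whitelist pattern and the app folder name Splunk_TA_otel are assumptions for illustration, so check the actual folder name inside the downloaded add-on package:

```
# serverclass.conf on the deployment server
[serverClass:otel_hosts]
whitelist.0 = k8s-node-*

[serverClass:otel_hosts:app:Splunk_TA_otel]
stateOnClient = enabled
restartSplunkd = true
```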
Hey @asimit 

Out of interest, what LLM are you using to generate these responses? By the way, half of the links you posted are hallucinations.
I encountered nearly identical symptoms on one of my RHEL9 systems. The key errors you mentioned:
- "splunkd is not running. failed splunkd.pid doesn't exist"
- "tcp_conn_open_afux ossocket_connect failed with no such file or directory"
- Forwarder showing as "inactive" despite correct configuration

These are classic signs of what I discovered was a "split brain" situation with duplicate systemd service files. While your reinstall fixed it (likely by cleaning up these duplicate files), others might benefit from understanding the root cause.

Check for duplicate service files:

ls -la /usr/lib/systemd/system/SplunkForwarder.service
ls -la /etc/systemd/system/SplunkForwarder.service

If both exist, that's the problem! The one in /etc/systemd/system/ takes precedence and might have different user/permission settings. In my case, one was set to run as the SplunkFwd user while the other was running as root. This causes a situation where:
- Systemd shows SplunkForwarder running
- The Splunk CLI thinks it's not running
- Permission conflicts prevent proper operation
- PID file issues occur
- Connection failures happen despite proper network connectivity

The fix is simpler than reinstalling:

sudo rm /etc/systemd/system/SplunkForwarder.service
sudo systemctl daemon-reload
sudo systemctl restart SplunkForwarder

This can happen when multiple installation methods are used (like an RPM install + splunk enable boot-start). Sharing this because my "nuke and pave" initially didn't work either, until I discovered this specific issue. Being in the DoD air-gap hell made this even harder to troubleshoot!
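If you do rebuild the unit file rather than hand-editing it, it can be cleaner to let Splunk regenerate boot-start once, under the right account, so only a single definition exists. A sketch assuming a default /opt/splunkforwarder install, a splunk service account, and a forwarder version whose CLI supports the -systemd-managed flag:

```
# Remove any previously generated boot-start config, then recreate it once,
# systemd-managed and running as the unprivileged splunk account
sudo /opt/splunkforwarder/bin/splunk disable boot-start
sudo /opt/splunkforwarder/bin/splunk enable boot-start -user splunk -systemd-managed 1
sudo systemctl daemon-reload
sudo systemctl start SplunkForwarder
```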
Hi @Na_Kang_Lim,

The TrackMe App has a default data retention/tracking period setting that's likely causing you to only see hosts with data_last_time_seen values after 04/03. This is controlled by the "trackme_max_data_tracker_history" macro, which defaults to 30 days of history retention.

You can modify this setting to include older hosts:
a. Navigate to Settings > Advanced Search > Search macros
b. Find the macro "trackme_max_data_tracker_history"
c. By default, it's set to "-30d", which means it only tracks 30 days of history
d. Modify this value to extend the period (e.g., "-60d" or "-90d")
e. Save your changes and wait for the next scheduled collection to run

Additionally, check these potential causes:
a. The TrackMe collection might have been reset or reinstalled around 04/03
b. There might be a tracker purge operation scheduled that removes older entries
c. The "Data Sampling" feature might be enabled with a custom time range

You can verify current settings with:
```
| rest /servicesNS/-/trackme/configs/conf-trackme/trackme_gen_settings
| table title value
```

If you need to force TrackMe to rebuild its tracking data:
```
| outputlookup trackme_host_monitoring
```

Please give karma for support. Happy Splunking!
Hi @MsF-2000,

The issue you're experiencing with Dashboard Studio PDF exports showing empty panels is likely due to timing problems. When exporting dashboards to PDF, Splunk uses Chromium, which has default timeouts that may not allow enough time for your searches to complete.

Here are solutions to try:

1. Adjust the Chromium timeout settings in limits.conf:
A- Increase `render_chromium_timeout` (default is 30 seconds)
B- Set `render_chromium_screenshot_delay` to allow more time for searches to complete

2. Optimize your underlying searches:
A- Ensure searches use acceleration features when possible
B- Consider using report acceleration or summary indexing for complex queries
C- Check if any individual panels take longer than 30 seconds to load

3. Try scheduled PDF delivery instead of on-demand export:
A- Configure scheduled reports with PDF delivery
B- These often provide more time for search completion than interactive exports

4. For Dashboard Studio specifically:
A- Avoid using global inputs when possible, as they can delay rendering
B- Consider converting complex panels to Simple XML if persistent problems occur

If modifying limits.conf settings, contact Splunk Cloud Support, as they'll need to make these changes for you.

Has anyone checked how long the dashboard takes to fully load all data when viewed in the browser?
Hi @dineshchoudhary,

When your "Ignored Messages" configuration isn't working to filter out EventLogger errors in AppDynamics, you need a more comprehensive approach. Here are several methods to stop those noisy events:

## Method 1: Correct Configuration of Error Detection Rules

The way you've configured the ignored message might not be matching correctly. Try these variations:

1. Use wildcards in your ignore pattern:
a. Use "xxxx.Logs.EventLogger*" instead of the exact message
b. Try "*EventLogger*Error*" to match more broadly
c. Make sure there are no hidden characters in your pattern

2. Check case sensitivity:
a. AppDynamics error matching can be case-sensitive
b. Try matching with both uppercase and lowercase variations

## Method 2: Agent Configuration File Approach

If the UI-based configuration isn't working, modify the agent config file directly:

1. Locate your agent configuration file:
a. For the .NET agent: look for the "config.xml" file in the installation directory
b. Typically found in "C:\ProgramData\AppDynamics\DotNetAgent\Config"

2. Add the error exclusion in the configuration file:

```xml
<configuration>
  <errorDetection>
    <ignoredExceptions>
      <exception>
        <exceptionType>xxxx.Logs.EventLogger</exceptionType>
        <exceptionMessage>Error Message: Error</exceptionMessage>
      </exception>
    </ignoredExceptions>
  </errorDetection>
</configuration>
```

3. Restart the application and the AppDynamics agent after making changes

## Method 3: Code-Based Solution

If you have access to the application code, consider modifying the logging implementation:

1. In your .NET application, modify the EventLogger class:
a. Add logic to prevent reporting these specific errors to AppDynamics
b. Use the AppDynamics API to exclude certain log messages

```csharp
// Example code to prevent AppD from capturing certain errors
try
{
    // Your code that might log errors
}
catch (Exception ex)
{
    // Use AppDynamics API to mark as "do not report"
    AppDynamics.Agent.Transaction.GetCorrelatingTransaction().DoNotReportAsError();
    // Continue with your regular logging
    EventLogger.LogError("Error Message: Error");
}
```

## Method 4: Application Tier Settings

You might need to adjust settings at the tier level:

1. Navigate to your application configuration:
a. Open the AppDynamics Controller UI
b. Go to Applications > [Your Application] > Configuration > Error Detection

2. Check both "Errors to Ignore" and "Custom Error Detection" sections:
a. Add your pattern in both locations
b. Use "Contains" match criteria rather than "Equals"

3. Apply changes to the specific tier where your .NET application is running

If these approaches don't resolve the issue, the error messages might be coming from a different source than expected, or there might be a configuration sync issue between your agent and the controller.

Please give karma for support and mark as solution. Happy Splunking!
My situation is quite simple: I lack the basic training. ChatGPT showed me the way. Maybe this can be an answer for another Splunk report newbie super user.

"Ah, so you want to deduplicate data and use it in a Pivot table. Great clarification! Pivot in Splunk is based on Data Models, and indeed, there are limitations on SPL commands (like | dedup _raw) in that context.

The Problem: Pivot uses a Data Model, and in the base search of a Data Model, you cannot use pipe (|) commands like dedup.

Your Goal: Remove duplicates based on _raw and still use the data in a Pivot table.

Solution Options for Use in Pivot:

Option 1: Create a Saved Search with dedup, then build a Data Model on top of it

This is the recommended method.

Step 1: Create a Saved Search
- Go to Splunk's Search view.
- Enter the SPL:
  index=your_index sourcetype=your_sourcetype | dedup _raw
- Click Save As → Report. Name it, for example: Deduped Raw Events.

Step 2: Create a new Data Model based on that Saved Search
- Go to Settings → Data Models → New Data Model.
- Give it a name and save it.
- Add a new Object, and choose:
  Object Type: Event
  Constraint: Use the saved search: savedsearch="Deduped Raw Events"
- NOTE: savedsearch="your_report_name" references the saved search.

Step 3: Use Pivot on top of this Data Model
- Go to Pivot → Select your new Data Model → Deduped Raw Events.
- Build your table as desired.

Notes:
- This only works if the saved search is public (shared) or you have permission to use it.
- The Saved Search must return fields that you can use in Pivot (like _time, host, source, custom fields, etc.).

Option 2: Simulate Dedup within the Data Model (if possible)

Data Models do not allow | dedup, but you can:
- Add an auto-extracted field, which lets you group by that field in Pivot.
- Or, if you have a unique identifier (e.g., event_id), you can use first-value or latest-value aggregations in Pivot to simulate deduplication.

Summary:
- Saved Search + dedup: dedup allowed, usable in Pivot.
- Native Data Model search: dedup not allowed; usable in Pivot, but very limited.
- SPL with pipes in the Pivot UI: not allowed.

If you'd like, I can also help you write the full search or configure it for a specific type of data or log source. Just let me know what you're using it for in Pivot!"

The data is very simple event-log-type data. The amount of data is small. There is a unique field in the log lines (event id). The question was about how to tweak an existing data set. Splunk is not good for these types of business reports, which should be moved to another report platform (e.g. MSBI).
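If you end up keeping the saved-search approach from the quoted steps, the same report can also be managed as configuration instead of via Save As. A minimal sketch using the placeholder names from the quote (put it in your app's local directory):

```
# savedsearches.conf
[Deduped Raw Events]
search = index=your_index sourcetype=your_sourcetype | dedup _raw
dispatch.earliest_time = -24h@h
dispatch.latest_time = now
```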
Hi @molla 

This text wrapping change is indeed a behavior change in Splunk Dashboard Studio in version 9.4.2. It's part of Splunk's ongoing improvements to make dashboards more responsive and readable across different screen sizes, but I understand it may not be what you're looking for in your specific dashboards.

To revert to the previous behavior where text is cut off with ellipses rather than wrapped, you have a few options:

## For Text Visualization Elements:

1. Use CSS to override the wrapping behavior:
a. Click on the text element in Dashboard Studio
b. Go to the Style tab
c. Click on "Advanced" at the bottom
d. Add the following CSS:
```
.wrapped-text {
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}
```
e. Apply the custom CSS class to your text element

2. Set a fixed width for the text container:
a. Select the containing element (panel or section)
b. Set a specific width (rather than responsive/percentage-based)
c. This will cause text to behave more predictably

## For Table Elements:

1. Modify cell behavior:
a. Select your table visualization
b. Click on the "Format" tab
c. Find the specific column you want to modify
d. Click "Cell" settings
e. Under "Text Overflow", select "Clip" or "Ellipsis" instead of "Wrap"

2. Apply custom CSS to table cells:
a. Select the table
b. Go to the Style tab
c. Click on "Advanced"
d. Add the following CSS:
```
td.splunk-table-cell {
  white-space: nowrap !important;
  overflow: hidden !important;
  text-overflow: ellipsis !important;
}
```

## For Single Value Visualizations:

1. Adjust truncation settings:
a. Select the single value visualization
b. Go to the Format tab
c. Find the "Truncate" option
d. Set it to "End" and specify the desired character limit

If these methods don't work for your specific dashboard elements, you may need to provide more details about which specific visualization types are showing the unwanted wrapping behavior.

Note that this is a UI/rendering change in 9.4.2, so there's no global setting to revert all text handling to the previous version's behavior. Each element needs to be adjusted individually.

Please give karma for support. Happy Splunking!
Hi @asadnafees138 

When backing up Splunk logs to AWS S3, the format depends on which method you use for the backup. Here are the available methods and their corresponding formats:

## Ingest Actions

If you're using Ingest Actions to send data to S3, you have three format options:

1. Raw format
a. This preserves the original format of your data exactly as it was ingested
b. Best for maintaining complete fidelity with source data
c. May require additional parsing when retrieving

2. Newline-delimited JSON (ndjson)
a. Each event is a separate JSON object on a new line
b. Includes both the raw event and extracted fields
c. Best option if you plan to use Federated Search with S3 later
d. Easily parseable by many analytics tools

3. JSON format
a. Standard JSON format with events in an array structure
b. Includes metadata and extracted fields
c. Good for interoperability with other systems

## Edge Processor and Ingest Processor

If using Edge Processor or Ingest Processor:

1. JSON format
a. Default format
b. Structured in HTTP Event Collector (HEC) compatible format
c. Includes event data and metadata

2. Parquet format (Ingest Processor only)
a. Columnar storage format
b. Offers better compression and query performance
c. Excellent for analytical workloads
d. Supported by many big data tools

## Frozen Data Archive

If archiving frozen buckets to S3:

1. Splunk proprietary format
a. Data stored in Splunk's internal format (tsidx and raw files)
b. Requires Splunk to read and interpret
c. Best for data you might want to thaw back into Splunk later

2. Custom format (with cold2frozen scripts)
a. You can customize how data is exported using scripts
b. Can transform to various formats including CSV, JSON, etc.

## Recommendations

Based on your needs for future retrieval and readability:

1. If you need the data to be easily readable by other systems:
a. Use Ingest Actions with ndjson format
b. Or Ingest Processor with Parquet format for analytical workloads

2. If you might want to re-ingest into Splunk:
a. ndjson format is easiest for re-ingestion
b. Frozen bucket archives can be thawed, but only within Splunk

3. If storage efficiency is a priority:
a. Parquet format (via Ingest Processor) offers the best compression
b. ndjson is a good balance between readability and size

For comprehensive documentation, refer to:
- Ingest Actions: https://docs.splunk.com/Documentation/SplunkCloud/latest/IngestActions/S3Destination
- Edge Processor: https://docs.splunk.com/Documentation/SplunkCloud/latest/EdgeProcessor/AmazonS3Destination
- Ingest Processor: https://docs.splunk.com/Documentation/SplunkCloud/latest/IngestProcessor/AmazonS3Destin
- Data Self-Storage: https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/DataSelfStorage

Please give karma for support. Happy Splunking!
Hi @Uday,

There are several approaches to create a server status dashboard in Splunk when you don't have explicit "server up/down" logs. Here are the most effective methods:

## Method 1: Check for Recent Log Activity

This is the simplest approach: if a server is sending logs, it's probably up.

```
| metadata type=hosts index=*
| search host=*
| eval lastTime=strftime(recentTime,"%Y-%m-%d %H:%M:%S")
| eval status=if(now()-recentTime < 600, "UP", "DOWN")
| table host lastTime status
| sort host
```

Customize the time threshold (600 seconds = 10 minutes) based on your expected log frequency.

## Method 2: Using Rangemap for Visualization

Use rangemap to assign colors to status values (rangemap needs a numeric field, hence the status_code eval):

```
| metadata type=hosts index=*
| search host=*
| eval lastTime=strftime(recentTime,"%Y-%m-%d %H:%M:%S")
| eval seconds_since_last_log=now()-recentTime
| eval status=if(seconds_since_last_log < 600, "UP", "DOWN")
| eval status_code=if(status="UP", 0, 1)
| rangemap field=status_code low=0-0 severe=1-1
| table host lastTime status range
| sort host
```

For dashboard visualization, you'll need to add:

1. A CSS file (table_decorations.css) with this content:

```css
.severe {
  background-color: #dc4e41 !important;
  color: white !important;
}
.low {
  background-color: #65a637 !important;
  color: white !important;
}
```

2. A JavaScript file (table_icons_rangemap.js) with this content:

```javascript
require([
  'underscore',
  'jquery',
  'splunkjs/mvc',
  'splunkjs/mvc/tableview',
  'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
  var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
    canRender: function(cell) {
      return cell.field === 'range';
    },
    render: function($td, cell) {
      var value = cell.value;
      if (value === "severe") {
        $td.addClass('severe');
        $td.html('Down');
      } else if (value === "low") {
        $td.addClass('low');
        $td.html('Up');
      }
      return $td;
    }
  });
  mvc.Components.get('table1').getVisualization(function(tableView) {
    tableView.addCellRenderer(new CustomRangeRenderer());
    tableView.render();
  });
});
```

3. Dashboard XML that includes these files:

```xml
<form script="table_icons_rangemap.js" stylesheet="table_decorations.css">
  <label>Server Status Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table id="table1">
        <search>
          <query>| metadata type=hosts index=*
| search host=*
| eval lastTime=strftime(recentTime,"%Y-%m-%d %H:%M:%S")
| eval seconds_since_last_log=now()-recentTime
| eval status=if(seconds_since_last_log &lt; 600, "UP", "DOWN")
| eval status_code=if(status="UP", 0, 1)
| rangemap field=status_code low=0-0 severe=1-1
| table host lastTime status range
| sort host</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
```

## Method 3: Include All Expected Servers

To also show servers that aren't sending logs at all, use a lookup with all expected servers:

```
| inputlookup your_servers.csv
| append [| metadata type=hosts index=*]
| stats max(recentTime) as recentTime by host
| eval lastTime=if(isnotnull(recentTime),strftime(recentTime,"%Y-%m-%d %H:%M:%S"),"Never")
| eval seconds_since_last_log=if(isnotnull(recentTime),now()-recentTime,999999)
| eval status=if(seconds_since_last_log < 600, "UP", "DOWN")
| eval status_code=if(status="UP", 0, 1)
| rangemap field=status_code low=0-0 severe=1-1
| table host lastTime status range
| sort host
```

## Method 4: Advanced Server Status Check (Recommended for Critical Systems)

If exact server status is critical, create a scheduled search that sends heartbeats from each server and alerts when they're missing:

1. Create a small script on each server that sends a heartbeat event every few minutes:

```
index=server_status sourcetype=heartbeat host=$HOSTNAME$ status=ALIVE
```

2. Then use this search for your dashboard:

```
| inputlookup your_servers.csv
| map search="search earliest=-10m latest=now index=server_status sourcetype=heartbeat host=$host$ | head 1 | fields host"
| fillnull value="DOWN" status
| eval status=if(host=="NULL","DOWN","UP")
| eval status_code=if(status="UP", 0, 1)
| rangemap field=status_code low=0-0 severe=1-1
| table host status range
```

This solution is more accurate than just checking for any logs, as it specifically monitors for heartbeat messages.

Remember to place your CSS and JS files in the /appserver/static/ directory of your app, and restart Splunk after adding them.

Please give karma for support. Happy Splunking!
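As a small aside on Method 1: tstats gives the same host/last-seen information scoped precisely to whatever time range the search runs over, which some people prefer to metadata. A sketch with the same 600-second threshold used above:

```
| tstats latest(_time) as recentTime where index=* by host
| eval lastTime=strftime(recentTime, "%Y-%m-%d %H:%M:%S")
| eval status=if(now() - recentTime < 600, "UP", "DOWN")
| table host lastTime status
| sort host
```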
Hi @genesiusj,

Based on your description, you're dealing with a time series forecasting problem where you want to predict future user access patterns on Sundays. For this type of scenario in MLTK, I would recommend the following algorithms:

## Recommended Algorithms

1. Prophet
a. Excellent for time series data with strong seasonal patterns (like your Sunday-only data)
b. Handles missing values well, which is useful since many users may have zero counts on certain days
c. Can capture multiple seasonal patterns (weekly, monthly, yearly)
d. Works well when you have 6 months of historical data

2. ARIMA (AutoRegressive Integrated Moving Average)
a. Good for detecting patterns and generating forecasts based on historical values
b. Works well for data that shows trends over time
c. Can handle seasonal patterns with the seasonal variant (SARIMA)
d. Requires stationary data (you might need to difference your time series)

## Implementation Approach

For your specific use case with 1000 users, I would recommend using a separate model for each user who has sufficient historical data. Here's how you could implement this with Prophet:

```
| inputlookup your_lookup.csv
| where DATE >= "2020-01-05" AND DATE <= "2020-06-28"
| rename DATE as ds, COUNT as y
| fit Prophet future_timespan=26 from ds y by USER
| where isnull(y)
| eval date_str=strftime(ds, "%Y-%m-%d")
| rename ds as DATE
| fields DATE USER yhat yhat_lower yhat_upper
| eval predicted_count = round(yhat)
| fields DATE USER predicted_count
```

For comparison with actual values:

```
| inputlookup your_lookup.csv
| where DATE >= "2020-07-05" AND DATE <= "2020-12-27"
| join type=left USER DATE
    [| inputlookup your_lookup.csv
    | where DATE >= "2020-01-05" AND DATE <= "2020-06-28"
    | rename DATE as ds, COUNT as y
    | fit Prophet future_timespan=26 from ds y by USER
    | where isnull(y)
    | eval DATE=strftime(ds, "%Y-%m-%d")
    | fields DATE USER yhat
    | eval predicted_count = round(yhat)]
| rename COUNT as actual_count
| eval error = abs(actual_count - predicted_count)
| eval error_percentage = if(actual_count=0, if(predicted_count=0, 0, 100), round((error/actual_count)*100, 2))
```

## Handling Your Data Structure

Since you have 1000 users and 52 Sundays, I have a few recommendations for improving your forecasting:

1. Focus first on users with non-zero access patterns
a. Many users might have sparse or no access attempts, which can result in poor models
b. Consider filtering to users who accessed the system at least N times during the training period

2. Consider feature engineering
a. Add month and quarter features to help the model capture broader seasonal patterns
b. Include special event indicators if certain Sundays might have unusual patterns (holidays, etc.)
c. You might want to include a lag feature (access count from the previous Sunday)

3. Model evaluation
a. Compare MAPE (Mean Absolute Percentage Error) across different users and algorithms
b. For users with sparse access patterns, consider MAE (Mean Absolute Error) instead
c. Establish a baseline model (like average access count per Sunday) to compare against (see the sketch after this post)

4. Alternative approach for sparse data
a. For users with very sparse access patterns, consider binary classification
b. Predict whether a user will attempt access (yes/no) rather than count
c. Use algorithms like Logistic Regression or Random Forest for this approach

Hope this helps point you in the right direction! With 6 months of training data focused on weekly patterns, Prophet is likely your best starting point.

Please give karma for support. Happy Splunking!
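On point 3c above, a trivial per-user baseline to beat could be the average Sunday count from the training window; a sketch using the same lookup and field names as the searches above:

```
| inputlookup your_lookup.csv
| where DATE >= "2020-01-05" AND DATE <= "2020-06-28"
| stats avg(COUNT) as baseline_prediction by USER
| eval baseline_prediction = round(baseline_prediction)
```

Score this against the second half of the year the same way as the Prophet output; if Prophet can't beat the plain per-user average on MAPE or MAE, the extra model complexity isn't paying off.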
How big and complex is your dataset, and how much is its content changing? And how long a time span does it cover?