All Posts

Of course, @livehybrid. Thanks for your comment.
@Trevorator  After the initial creation of data model acceleration summaries, Splunk regularly runs scheduled summarization searches to incorporate new data and remove information that is older than the defined summary range. If a summarization search exceeds the Max Summarization Search Time limit, it is stopped before completing its assigned interval. Normally, Splunk does not automatically retry or continue the interrupted summarization for that specific time window, which can result in gaps in your accelerated data if summarization searches repeatedly fail or time out. These gaps mean that some events will not be included in the .tsidx summary files, causing searches that rely on tstats summariesonly=true to miss those events. I would say the best approach is to address the resource constraints causing your summarization searches to run too long.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
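If you want to confirm whether such gaps exist, one way is to compare summary-only counts against full counts per hour. This is a minimal sketch, not a definitive check: YourDataModel is a placeholder for your data model name, and the summariesonly=false leg runs the full (unsummarized) search, so keep the time range small.

| tstats summariesonly=true count as summarized from datamodel=YourDataModel where earliest=-24h by _time span=1h
| append
    [| tstats summariesonly=false count as total from datamodel=YourDataModel where earliest=-24h by _time span=1h]
| stats values(summarized) as summarized values(total) as total by _time
| where total > coalesce(summarized, 0)

Any rows returned are hourly buckets where the accelerated summary is missing events that the raw data contains.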
As @bowesmana points out, join is not the correct approach.  @livehybrid gives logic that implements your requirements, but the implementation inherits some convoluted logic from your original attempt. (The use of tstats requires that the field System_Name is the same as host or is otherwise extracted at index time, but your original SPL seems to imply the opposite, so I will not make that assumption below.)

Your original expression

| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")

really just says that a System contained in system_info.csv is MISSING if no matching System_Name appears in the sourcetype logs, or the most recent matching System_Name in the sourcetype logs is more than 2 hours old. Is this correct?  There should be no need to even evaluate last_seen_ago_in_seconds if you simply filter the search with earliest=-2h, which is also more efficient.  Additionally, the field MISSING is unnecessary in the table because it will always have the value "MISSING" according to your logic.

Here is a much simplified logic:

index=servers sourcetype=logs earliest=-2h
| stats latest(_time) as Time by System_Name sourcetype
| append
    [| inputlookup system_info.csv
    | fields System Location Responsible ``` ^^^ only necessary if there are more than these three fields ```
    | rename System as System_Name]
| stats values(sourcetype) as _last2hours values(Location) as Location values(Responsible) as Responsible by System_Name
| where isnull(_last2hours)
Hi @L_Petch

SC4S (Splunk Connect for Syslog) is purpose-built for syslog ingestion, offering features like automatic source categorization, syslog protocol handling, and Splunk CIM compatibility. NGINX is a general-purpose reverse proxy and load balancer, not a syslog server, so using NGINX for syslog forwarding requires extra configuration and lacks SC4S's syslog-specific features. SC4S is preferred for syslog ingestion into Splunk due to its built-in parsing, normalization, and Splunk integration.

For keepalived with Podman, simply tracking the Podman process may not reliably detect whether the SC4S container is healthy or running. Instead, use a custom health check script in keepalived that verifies the SC4S container is running and listening on the expected syslog port (e.g., using podman ps and ss or nc to check port status). This ensures the VIP only fails over when SC4S is truly unavailable.

#!/bin/bash
# Example keepalived health check script
# Check if SC4S container is running
podman ps --filter "name=sc4s" --filter "status=running" | grep sc4s > /dev/null 2>&1 || exit 1
# Check if syslog port (e.g., 514) is listening
ss -ltn | grep ':514 ' > /dev/null 2>&1 || exit 1
exit 0

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
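For completeness, here is a minimal sketch of how such a script could be wired into keepalived. The script path, interface name, virtual_router_id, and VIP below are placeholders, not values from your environment:

vrrp_script chk_sc4s {
    # path where the health check script above was saved
    script "/etc/keepalived/check_sc4s.sh"
    # run every 5 seconds; fail over after 2 consecutive failures
    interval 5
    fall 2
    rise 2
}

vrrp_instance VI_SC4S {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10/24
    }
    track_script {
        chk_sc4s
    }
}

With this in place, keepalived releases the VIP when the script exits non-zero, i.e. when the SC4S container is not running or port 514 is not listening.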
Hi @Trevorator

What are your acceleration.backfill_time and acceleration.earliest_time set to?

Reducing acceleration.max_time from 3600 seconds to 1800 seconds is unlikely to be a solution and may worsen the problem. If a summarization search requires, for example, 2000 seconds to process its assigned time range due to resource constraints, it would complete with a 3600-second timeout but would fail with an 1800-second timeout. This would lead to more frequent timeouts and potentially larger gaps in your accelerated data.

I think the best option is to determine why the search is taking so long to run. Is the DM restricted to only your required set of indexes?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
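For reference, these settings live in datamodels.conf on the search head (or under the data model's acceleration settings in the UI). A minimal sketch with placeholder stanza name and values, not your actual configuration:

[YourDataModel]
acceleration = true
# how far back the summaries should cover
acceleration.earliest_time = -1mon
# optional: limit initial backfill to a subset of the summary range
acceleration.backfill_time = -7d
# per-run summarization search timeout, in seconds
acceleration.max_time = 3600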
That's great, I'm glad you managed to get it resolved. If my previous reply helped at all then please consider adding karma to the reply.  Thanks
It shows that you think like SQL.  The API version in your examples is the easiest to extract in Splunk (in fact, in any modern language other than SQL), and using the case function is about the most complicated method.  @livehybrid and @richgalloway suggested regex.  There is an even simpler and perhaps cheaper method:

| eval version = mvindex(split(API_RESOURCE, "/"), 1)
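For a quick check, here is a runnable sketch using one of the sample values posted earlier in this thread:

| makeresults
| eval API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_request"
| eval version = mvindex(split(API_RESOURCE, "/"), 1)

This yields version="v63.0"; apply another split() on "." if you only want the major part (v63).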
You can write dashboards to "edit" lookup tables, but it involves the use of inputlookup and outputlookup to update/add/delete items from the lookup. It's a bit involved but involves setting and clea... See more...
You can write dashboards to "edit" lookup tables, but it involves the use of inputlookup and outputlookup to update/add/delete items from the lookup. It's a bit involved but involves setting and clearing tokens that allow the searches to run and using some kind of key to identify each row of the table for updates and deletes. Typically add would | inputlookup your_table.csv | append [ | makeresults | eval ... set your fields here from dashboard token form inputs ] | outputlookup your_table.csv Update would | inputlookup your_table.csv | eval field1=if(this_row=row_to_update, new_field1, old_field1) ... for each field | outputlookup your_table.csv and delete would | inputlookup your_table.csv | where event!=event_to_delete | outputlookup your_table.csv  We use a small piece of JS to implement buttons for the "commit" part of the form input. It's a bit of a fiddly dashboard, but it's possible - we use it a lot.  
Your sed command is wrong for your example data. The space is before the colon in your example, but your sed is replacing the space after the colon. Your example data as posted seems to have two spaces before the colon; at least, if I copy/paste your data there are two spaces.

Note also that you could do the fixup and strptime once, e.g.

| makeresults format=csv data="ClintReqRcvdTime
Wed 4 Jun 2025 17:16:02 :161 EDT
Mon 2 Jun 2025 02:52:50 :298 EDT
Mon 9 Jun 2025 16:11:05 :860 EDT"
``` This is what you want - above is just constructing an example dataset ```
| eval t=strptime(replace(ClintReqRcvdTime, "\s*:\s*", ":"), "%a %d %b %Y %H:%M:%S:%Q %Z")
| eval date_only=strftime(t, "%m/%d/%Y")
| eval year_only=strftime(t, "%Y")
| eval month_only=strftime(t, "%b")
| eval week_only=floor(tonumber(strftime(t, "%d"))/7+1)
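If the original fixup was being done with rex in sed mode rather than eval replace(), a corrected equivalent might look like this (a sketch that assumes the field is named ClintReqRcvdTime, as above):

| rex mode=sed field=ClintReqRcvdTime "s/\s*:\s*/:/g"
| eval t=strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S:%Q %Z")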
In splunkd.log of HF2 I could see these entries:
in splunkd.log , of HF2 I could see these  06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_host_thruput, ingest_pipe=2, series="lmpsplablr001", kbps=4.247, eps=24.613, kb=131.668, ev=763, avg_age=2.279, max_age=3 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_index_thruput, ingest_pipe=2, series="_internal", kbps=2.206, eps=13.032, kb=68.396, ev=404, avg_age=2.233, max_age=3 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_index_thruput, ingest_pipe=2, series="_metrics", kbps=2.041, eps=11.581, kb=63.272, ev=359, avg_age=2.331, max_age=3 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_source_thruput, ingest_pipe=2, series="/mnt/splunk/splunk/var/log/splunk/audit.log", kbps=0.000, eps=0.032, kb=0.000, ev=1, avg_age=0.000, max_age=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_source_thruput, ingest_pipe=2, series="/mnt/splunk/splunk/var/log/splunk/metrics.log", kbps=4.082, eps=23.355, kb=126.545, ev=724, avg_age=2.312, max_age=3 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_source_thruput, ingest_pipe=2, series="/mnt/splunk/splunk/var/log/splunk/splunkd_access.log", kbps=0.165, eps=1.226, kb=5.123, ev=38, avg_age=1.711, max_age=3 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_sourcetype_thruput, ingest_pipe=2, series="splunk_audit", kbps=0.000, eps=0.032, kb=0.000, ev=1, avg_age=0.000, max_age=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_sourcetype_thruput, ingest_pipe=2, series="splunk_metrics_log", kbps=2.041, eps=11.677, kb=63.272, ev=362, avg_age=2.312, max_age=3 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_sourcetype_thruput, ingest_pipe=2, series="splunkd", kbps=2.041, eps=11.677, kb=63.272, ev=362, avg_age=2.312, max_age=3 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_sourcetype_thruput, ingest_pipe=2, series="splunkd_access", kbps=0.165, eps=1.226, kb=5.123, ev=38, avg_age=1.711, max_age=3 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=tcpout_my_syslog_group, max_size=512000, current_size=0, largest_size=0, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=tcpout_primary_indexers, max_size=512000, current_size=0, largest_size=219966, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=aggqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=260, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=indexqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=8, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=nullqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=parsingqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=2, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=syslog_system, max_size_kb=97, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=syslog_system2, max_size_kb=97, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=typingqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=257, smallest_size=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - 
group=syslog_connections, ingest_pipe=2, syslog_system2:x.x.x.x:514:x.x.x.x:514, sourcePort=8089, destIp=x.x.x.x, destPort=514, _tcp_Bps=0.00, _tcp_KBps=0.00, _tcp_avg_thruput=3.71, _tcp_Kprocessed=2266, _tcp_eps=0.00 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=syslog_connections, ingest_pipe=2, syslog_system:y.y.y.y:514:y.y.y.y:514, sourcePort=8089, destIp=y.y.y.y, destPort=514, _tcp_Bps=0.00, _tcp_KBps=0.00, _tcp_avg_thruput=0.35, _tcp_Kprocessed=213, _tcp_eps=0.00 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=tcpout_connections, ingest_pipe=2, name=primary_indexers:z.z.z.z0:9997:0:0, sourcePort=8089, destIp=z.z.z.z0, destPort=9997, _tcp_Bps=1545.33, _tcp_KBps=1.51, _tcp_avg_thruput=1.51, _tcp_Kprocessed=45, _tcp_eps=0.50, kb=44.94 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=thruput, ingest_pipe=2, name=index_thruput, instantaneous_kbps=0.000, instantaneous_eps=0.000, average_kbps=0.000, total_k_processed=0.000, kb=0.000, ev=0 06-13-2025 12:34:19.086 +0800 INFO Metrics - group=thruput, ingest_pipe=2, name=thruput, instantaneous_kbps=4.505, instantaneous_eps=26.000, average_kbps=10.456, total_k_processed=6705.000, kb=139.648, ev=806, load_average=1.500   any abnormalities in this entries ? I suspect that issue is with HF 2 only .. when HF 2 stopped , everyother things works fine ..if HF2 service initiated, it start utilize 1GB to 50GB of memory only  out of 130GB , then the HF1 start use memory and log ingestion getting stopped .. especially from syslog server ( large log volume input) -> this index getting affected first  Hence increasing memory in HF 2 not helpful here
After further investigation, the file created from SplunkUI is located in the /etc/apps directory and must be moved to /opt/splunk/etc/apps/ so that it can be accessed from the Splunk browser.  
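A minimal sketch of that move on the command line; <app_name> is a placeholder for the generated app directory, and the paths assume a default /opt/splunk installation:

cp -r /etc/apps/<app_name> /opt/splunk/etc/apps/
/opt/splunk/bin/splunk restart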
Hi Everyone,

I am trying to install SplunkUI to explore it. The documentation I followed is from the following link: https://splunkui.splunk.com/Packages/create/CreatingSplunkApps

When I reached the 'yarn start' stage, everything went smoothly. However, when I restarted Splunk to see the results, I couldn't see any changes from before. Does anyone know about this issue?
@bspalding

Use initCrcLength if your files are extremely similar at the start and the UF is getting confused. For example (note: change the initCrcLength value based on the size of your common header):

[monitor://E:\path\logfile*.log]
disabled = 0
initCrcLength = 256
crcSalt = <UNIQUESOURCE>
index = XXXX
sourcetype = XXXX
_meta = env::prod-new

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
Fix the yarn version and install yarn correctly:

sudo npm install -g yarn@1.22.19
My bad, the solution is to install yarn correctly. My yarn installation was wrong.
Do the files have a common header?  If so, you may need to set initCrcLength to a value larger than the header.
And if you want the full version number (including the part after the "."), then try:

| rex field=API_RESOURCE "^/(?<version>v[\d\.]+)\/"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @ripvw32

Use rex with a regular expression to extract or normalize the version segment efficiently, instead of using multiple LIKE or case statements. For example:

| makeresults
| eval API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_request"
| append [| makeresults | eval API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_response"]
| append [| makeresults | eval API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_update"]
| append [| makeresults | eval API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_delete"]
| append [| makeresults | eval API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_delete"]
| append [| makeresults | eval API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_update"]
| append [| makeresults | eval API_RESOURCE="/v61.0/gobbledygook/URI_PATH_batch_updates"]
| rex field=API_RESOURCE "^/(?<version>v\d+)\."
| stats count by version

- rex extracts the version (e.g., v63, v62, v61) from the start of API_RESOURCE.
- stats count by version groups and counts by the extracted version.
- This approach is scalable and requires no manual updates for new versions.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Thank you so much for the response!!! That didn't seem to do the trick -  Still seeing entries like this in my table /v63.0/gobbldygook/unrequietededit/describe Also, I am seeing that API_RESOURCE also contains singular words, like "Update", "Delete", "Login" etc, with no v/2digitnumber (didn't see them before as the data is several dozen pages long, at 100 rows per page)
There may be more than one way to do that using regular expressions.  Here's one of them.

| rex field=API_RESOURCE "\/v(?<API_RESOURCE>\d+)"

Use this command line in place of the existing eval.