
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hello folks, I'm fighting some events that land in the future and am having some trouble working out the configuration for parsing an event. I have the following event (lightly redacted) and have tried several flavors of the stanza below, primarily messing with TIME_PREFIX, to no avail. For every change I make (and a Splunk restart afterwards), Splunk just wants the event in UTC and is not considering my timezone offset. Does anyone have any suggestions or thoughts as to why I cannot get Splunk to recognize that timestamp properly? Thank you.

{"id": 141865, "summary": "User's password changed", "remoteAddress": "X.X.X.X", "created": "2025-06-12T14:13:19.323+0000", "category": "user management", "eventSource": "", "objectItem": {"id": "lots_of_jibberish", "name": "lots_of_jibberish", "typeName": "USER", "parentId": "10000", "parentName": "com.AAA.BBB.CCC.DDD"}, "associatedItems": [{"id": "lots_of_jibberish", "name": "lots_of_jibberish", "typeName": "USER", "parentId": "10000", "parentName": "com.AAA.BBB.CCC.DDD"}]}

[my_stanza]
TIME_PREFIX = "created": "
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
TZ = UTC
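For illustration only, a hedged variant of that stanza, assuming the intent is for the trailing +0000 offset to be consumed by a %z conversion rather than matched as a literal Z; this is a sketch to experiment with, not a confirmed fix, and the MAX_TIMESTAMP_LOOKAHEAD value is just an example:

# Hypothetical variant: %z parses the "+0000" offset, so a forced TZ is not needed
[my_stanza]
TIME_PREFIX = "created":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40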
Hello, I have a lookup file uploaded and now I want to see the data on a map. I am not able to see it on the map, though I can see the details in a table. This is the query and a sample of the output; the map is blank.

| inputlookup geolocation.csv
| eval lat=tonumber(trim(latitude)), lon=tonumber(trim(longitude))
| where isnotnull(lat) AND isnotnull(lon)
| table cluster_name lat lon avg_cpu_load avg_mem_load

This is the output I get:

cluster_name  lat         lon          avg_cpu_load  avg_mem_load
ab.com        63.3441204  -8.2673368   96.88         78.55
bc.com        48.9401     62.8346587   55.49         95.49
fg.com        31.5669826  129.9782352  11            19.86
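As a side note, if this table is meant to drive the classic Cluster Map visualization, that visualization expects geostats output rather than a plain table. A minimal sketch reusing the fields from the post (the avg_cpu_load aggregation is only illustrative):

| inputlookup geolocation.csv
| eval lat=tonumber(trim(latitude)), lon=tonumber(trim(longitude))
| where isnotnull(lat) AND isnotnull(lon)
| geostats latfield=lat longfield=lon avg(avg_cpu_load) AS avg_cpu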
@PrewinThomas  This is what I was worried was the case. You said that "Normally, Splunk does not automatically retry or continue". Does that mean there is a setting we could enable to have Splunk do this, to ensure there is no loss in .tsidx files in the short term? The goal is to have all data accelerated for Enterprise Security searches. I know the long-term solution is new machines with better IOPS, but it may be some time before they are requisitioned.
We are currently pulling Akamai logs into Splunk using the Akamai add-on. As of now I am giving a single configuration ID to pull logs, but the Akamai team has asked us to pull logs for a bunch of config IDs at a time to save time. In the Name field we need to provide the service name (the configuration ID's app name), which will differ across config IDs; there will be a single index, and they will filter based on the name provided. How do we onboard them in bulk, and what naming convention should we use there? Please help me with your inputs.
Hi Team, Could you help me integrate NextDNS (Community App) with Splunk? I have downloaded and configured the app but am not able to see any logs.
Why are you not getting the code?
Hi @Trevorator , As @PrewinThomas pointed out, if your acceleration searches exceed the maximum time limit, you should analyze why this happens; in other words, look at your storage performance and whether system resources are sufficient. For storage performance, check that the IOPS of each storage volume is greater than 800 using an external tool such as Bonnie++, and check how many CPUs you have in your indexers and search heads using the Monitoring Console. Ciao. Giuseppe
Thank you @bowesmana !
Of course @livehybrid Thanks for your comment
@Trevorator  After the initial creation of data model acceleration summaries, Splunk regularly runs scheduled summarization searches to incorporate new data and remove information that is older than the defined summary range. If a summarization search exceeds the Max Summarization Search Time limit, it is stopped before completing its assigned interval. Normally, Splunk does not automatically retry or continue the interrupted summarization for that specific time window, which can result in gaps in your accelerated data if summarization searches repeatedly fail or time out. These gaps mean that some events will not be included in the .tsidx summary files, causing searches that rely on tstats summariesonly=true to miss those events. I would say the best approach is to address the resource constraints causing your summarization searches to run too long. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving kudos/karma. Thanks!
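For illustration, one hedged way to spot such gaps is to compare hourly counts with and without summariesonly; the Authentication data model name below is only a placeholder for whichever accelerated data model is affected:

| tstats summariesonly=true count AS summary_count FROM datamodel=Authentication BY _time span=1h
| append
    [| tstats summariesonly=false count AS full_count FROM datamodel=Authentication BY _time span=1h]
| stats sum(summary_count) AS summary_count sum(full_count) AS full_count BY _time
| fillnull value=0 summary_count full_count
| where full_count > summary_count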
As @bowesmana points out, join is not the correct approach. @livehybrid gives logic that implements your requirements, but the implementation inherits some convoluted logic from your original attempt. (The use of tstats requires that the field System_Name is the same as host or is otherwise extracted at index time. Your original SPL seems to imply the opposite, so I will not make that assumption below.) Your original expression

| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")

really just says a System contained in system_info.csv is MISSING if no matching System_Name appears in the sourcetype logs, or a matching System_Name appears but is more than 2 hours old. Is this correct? There should be no need to even evaluate last_seen_ago_in_seconds if you simply filter the search with earliest=-2h, which is more efficient, too. Additionally, the field MISSING is unnecessary in the table because it will always have the value "MISSING" according to your logic. Here is a much simplified logic:

index=servers sourcetype=logs earliest=-2h
| stats latest(_time) as Time by System_Name sourcetype
| append
    [ inputlookup system_info.csv
    | fields System Location Responsible ``` ^^^ only necessary if there are more than these three fields ```
    | rename System as System_Name]
| stats values(sourcetype) as _last2hours values(Location) as Location values(Responsible) as Responsible by System_Name
| where isnull(_last2hours)
Hi @L_Petch  SC4S (Splunk Connect for Syslog) is purpose-built for syslog ingestion, offering features like automatic source categorization, syslog protocol handling, and Splunk CIM compatibility. NGINX is a general-purpose reverse proxy and load balancer, not a syslog server, so using NGINX for syslog forwarding requires extra configuration and lacks SC4S's syslog-specific features. SC4S is preferred for syslog ingestion into Splunk due to its built-in parsing, normalization, and Splunk integration.

For keepalived with Podman, simply tracking the Podman process may not reliably detect whether the SC4S container is healthy or running. Instead, use a custom health check script in keepalived that verifies the SC4S container is running and listening on the expected syslog port (e.g., using podman ps and ss or nc to check port status). This ensures the VIP only fails over when SC4S is truly unavailable.

#!/bin/bash
# Example keepalived health check script
# Check if the SC4S container is running
podman ps --filter "name=sc4s" --filter "status=running" | grep sc4s > /dev/null 2>&1 || exit 1
# Check if the syslog port (e.g., 514) is listening
ss -ltn | grep ':514 ' > /dev/null 2>&1 || exit 1
exit 0

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Trevorator  What are your acceleration.backfill_time and acceleration.earliest_time set to?

Reducing acceleration.max_time from 3600 seconds to 1800 seconds is unlikely to be a solution and may worsen the problem. If a summarization search requires, for example, 2000 seconds to process its assigned time range due to resource constraints, it would complete with a 3600-second timeout but would fail with an 1800-second timeout. This would lead to more frequent timeouts and potentially larger gaps in your accelerated data.

I think the best option is to determine why the search is taking so long to run. Is the DM restricted to only your required set of indexes?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
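For reference, these acceleration settings live in datamodels.conf; a hypothetical stanza showing where they sit (the data model name and values are placeholders, not recommendations):

[My_Data_Model]
acceleration = true
acceleration.earliest_time = -30d
acceleration.backfill_time = -7d
acceleration.max_time = 3600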
That's great, I'm glad you managed to get it resolved. If my previous reply helped at all then please consider adding karma to the reply. Thanks
It shows that you think like SQL. The API version in your examples is the easiest to extract in Splunk (in fact, in any modern language other than SQL), and using the case function is about the most complicated method. @livehybrid and @richgalloway suggested regex. There is an even simpler and perhaps cheaper method:

| eval version = mvindex(split(API_RESOURCE, "/"), 1)
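For example, a quick way to try this out on a sample value (the API_RESOURCE value here is made up, assuming the version is the second path segment as in the suggestion above):

| makeresults
| eval API_RESOURCE="api/v2/users"
| eval version = mvindex(split(API_RESOURCE, "/"), 1)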
You can write dashboards to "edit" lookup tables, but it requires the use of inputlookup and outputlookup to update/add/delete items from the lookup. It's a bit involved: you set and clear tokens that allow the searches to run, and use some kind of key to identify each row of the table for updates and deletes.

Typically add would be

| inputlookup your_table.csv
| append [ | makeresults
  | eval ... set your fields here from dashboard token form inputs ]
| outputlookup your_table.csv

Update would be

| inputlookup your_table.csv
| eval field1=if(this_row=row_to_update, new_field1, old_field1) ... for each field
| outputlookup your_table.csv

and delete would be

| inputlookup your_table.csv
| where event!=event_to_delete
| outputlookup your_table.csv

We use a small piece of JS to implement buttons for the "commit" part of the form input. It's a bit of a fiddly dashboard, but it's possible - we use it a lot.
Your sed command is wrong for your example data. The space is before the colon in your example, but your sed is replacing the space after the colon. Your example data as posted seems to have two spaces before the colon, at least when I copy/paste your data.

Note also that you could do the fixup and strptime once, e.g.

| makeresults format=csv data="ClintReqRcvdTime
Wed 4 Jun 2025 17:16:02 :161 EDT
Mon 2 Jun 2025 02:52:50 :298 EDT
Mon 9 Jun 2025 16:11:05 :860 EDT"
``` This is what you want - above is just constructing an example dataset ```
| eval t=strptime(replace(ClintReqRcvdTime, "\s*:\s*", ":"), "%a %d %b %Y %H:%M:%S:%Q %Z")
| eval date_only=strftime(t, "%m/%d/%Y")
| eval year_only=strftime(t, "%Y")
| eval month_only=strftime(t, "%b")
| eval week_only=floor(tonumber(strftime(t, "%d"))/7+1)
In splunkd.log of HF2 I could see these entries:

06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_host_thruput, ingest_pipe=2, series="lmpsplablr001", kbps=4.247, eps=24.613, kb=131.668, ev=763, avg_age=2.279, max_age=3
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_index_thruput, ingest_pipe=2, series="_internal", kbps=2.206, eps=13.032, kb=68.396, ev=404, avg_age=2.233, max_age=3
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_index_thruput, ingest_pipe=2, series="_metrics", kbps=2.041, eps=11.581, kb=63.272, ev=359, avg_age=2.331, max_age=3
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_source_thruput, ingest_pipe=2, series="/mnt/splunk/splunk/var/log/splunk/audit.log", kbps=0.000, eps=0.032, kb=0.000, ev=1, avg_age=0.000, max_age=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_source_thruput, ingest_pipe=2, series="/mnt/splunk/splunk/var/log/splunk/metrics.log", kbps=4.082, eps=23.355, kb=126.545, ev=724, avg_age=2.312, max_age=3
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_source_thruput, ingest_pipe=2, series="/mnt/splunk/splunk/var/log/splunk/splunkd_access.log", kbps=0.165, eps=1.226, kb=5.123, ev=38, avg_age=1.711, max_age=3
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_sourcetype_thruput, ingest_pipe=2, series="splunk_audit", kbps=0.000, eps=0.032, kb=0.000, ev=1, avg_age=0.000, max_age=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_sourcetype_thruput, ingest_pipe=2, series="splunk_metrics_log", kbps=2.041, eps=11.677, kb=63.272, ev=362, avg_age=2.312, max_age=3
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_sourcetype_thruput, ingest_pipe=2, series="splunkd", kbps=2.041, eps=11.677, kb=63.272, ev=362, avg_age=2.312, max_age=3
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=per_sourcetype_thruput, ingest_pipe=2, series="splunkd_access", kbps=0.165, eps=1.226, kb=5.123, ev=38, avg_age=1.711, max_age=3
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=tcpout_my_syslog_group, max_size=512000, current_size=0, largest_size=0, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=tcpout_primary_indexers, max_size=512000, current_size=0, largest_size=219966, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=aggqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=260, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=indexqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=8, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=nullqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=parsingqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=2, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=syslog_system, max_size_kb=97, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=syslog_system2, max_size_kb=97, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=queue, ingest_pipe=2, name=typingqueue, max_size_kb=2560000, current_size_kb=0, current_size=0, largest_size=257, smallest_size=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=syslog_connections, ingest_pipe=2, syslog_system2:x.x.x.x:514:x.x.x.x:514, sourcePort=8089, destIp=x.x.x.x, destPort=514, _tcp_Bps=0.00, _tcp_KBps=0.00, _tcp_avg_thruput=3.71, _tcp_Kprocessed=2266, _tcp_eps=0.00
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=syslog_connections, ingest_pipe=2, syslog_system:y.y.y.y:514:y.y.y.y:514, sourcePort=8089, destIp=y.y.y.y, destPort=514, _tcp_Bps=0.00, _tcp_KBps=0.00, _tcp_avg_thruput=0.35, _tcp_Kprocessed=213, _tcp_eps=0.00
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=tcpout_connections, ingest_pipe=2, name=primary_indexers:z.z.z.z0:9997:0:0, sourcePort=8089, destIp=z.z.z.z0, destPort=9997, _tcp_Bps=1545.33, _tcp_KBps=1.51, _tcp_avg_thruput=1.51, _tcp_Kprocessed=45, _tcp_eps=0.50, kb=44.94
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=thruput, ingest_pipe=2, name=index_thruput, instantaneous_kbps=0.000, instantaneous_eps=0.000, average_kbps=0.000, total_k_processed=0.000, kb=0.000, ev=0
06-13-2025 12:34:19.086 +0800 INFO Metrics - group=thruput, ingest_pipe=2, name=thruput, instantaneous_kbps=4.505, instantaneous_eps=26.000, average_kbps=10.456, total_k_processed=6705.000, kb=139.648, ev=806, load_average=1.500

Are there any abnormalities in these entries? I suspect the issue is with HF2 only. When HF2 is stopped, everything else works fine. When the HF2 service is started, it utilizes only 1 GB to 50 GB of memory out of 130 GB, then HF1 starts using memory and log ingestion stops, especially from the syslog server (a large log volume input) -> that index gets affected first. Hence increasing memory on HF2 is not helpful here.
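As a general troubleshooting aid (not specific to the output above), a hedged search of the forwarder's _internal data can show whether any pipeline queue is reporting itself as blocked; the host value is a placeholder for the HF2 hostname:

index=_internal host="HF2" source=*metrics.log* group=queue blocked=true
| timechart span=5m count by name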
After further investigation, the app created with SplunkUI is located in the /etc/apps directory and must be moved to /opt/splunk/etc/apps/ so that it can be accessed from Splunk in the browser.
Hi Everyone, I am trying to install SplunkUI to explore it. The documentation I followed is at the following link: https://splunkui.splunk.com/Packages/create/CreatingSplunkApps. When I reached the 'yarn start' stage, everything went smoothly. However, when I restarted Splunk to see the results, I couldn't see any changes from before. Does anyone know about this issue?