Hi @palyogit 

Looking at this I think there are two issues. I'm not entirely sure they are related as others have suggested, because you wouldn't usually expect an event to be dropped if it hits the TRUNCATE limit; you would just be left with the first 10,000 characters.

The first thing to do is increase that 10000 limit. Are you expecting the events to be this large?

# props.conf
[httpevent]
# Increase to a number bigger than the events which are being truncated.
TRUNCATE = 50000

The other log line which caught my eye is:

RegexExtractor: Interpolated to processor::nullqueue

especially because you are missing the events entirely. Do you have any props which are setting the nullQueue? Please can you run btool and share the output?

$SPLUNK_HOME/bin/splunk cmd btool props list --debug httpevent

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
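For reference, a nullQueue route usually looks something like this in transforms.conf, referenced from props.conf (a sketch to compare against; the stanza name and regex here are hypothetical, not taken from your config):

# transforms.conf (hypothetical stanza)
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# props.conf
[httpevent]
TRANSFORMS-null = setnull

If the btool output shows a TRANSFORMS-* setting pointing at a stanza like this for httpevent, that would explain why the events never reach the index.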
@krishna4murali 

That behavior isn't expected. Using 0 11 * * 1,4 in your cron schedule should result in the job triggering only at 11:00 AM on Mondays and Thursdays.

Could you confirm: is the job actually running at 11:00 AM on Tuesdays and Wednesdays, or at any other unexpected times?

Please also review the time zone setting on the host where this cron is configured.

For testing, try configuring the schedule below just to make sure it executes as expected:

0 12 * * *

Ideally that should execute at 12 PM every day, on the hour.

Regards,
Prewin
Splunk Enthusiast | Always happy to help!
If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
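Since this is a Splunk alert, it is also worth checking the raw stanza in savedsearches.conf, where the schedule actually lives (a sketch; the alert name is hypothetical):

# savedsearches.conf (alert name hypothetical)
[My Alert]
cron_schedule = 0 11 * * 1,4

You can dump the effective value with btool:

$SPLUNK_HOME/bin/splunk btool savedsearches list --debug | grep cron_schedule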
Thanks for the reply @livehybrid. There are no invalid characters in the cron expression, there is no space after the comma, and nothing after the last digit.
Hi @harryvdtol 

Do you have a dynamic number of results that would prevent you from creating a series of Column charts? If the number is fixed, I'd suggest that creating a number of charts based on the same base search would be the best way to achieve this (see the sketch after this post).

The only other thing I can think of is to use only the trend section of the single value chart, but that would be a line chart and would not have the same controls etc. as a standard line chart.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
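A minimal sketch of the base search plus chained searches in the dashboard's source JSON (the data source names and the field filter are hypothetical):

"dataSources": {
    "base": {
        "type": "ds.search",
        "options": {
            "query": "<your search> | xyseries Month Displayname duration"
        }
    },
    "dsp10": {
        "type": "ds.chain",
        "options": {
            "extend": "base",
            "query": "| fields Month DSP10"
        }
    }
}

Each Column chart in the grid then binds to one chained data source, so the base search only runs once.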
Hi @krishna4murali 

Does it run at the expected time of 11:00? I'm wondering if there is an invalid character in the 1,4 part somewhere that's causing it to run every day.

Just to clarify, there's no space between the comma and the next digit? No characters after the 4?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
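One way to check for invisible characters is to pull the raw setting out of the config and pipe it through cat -A, which makes non-printing characters visible (a sketch, assuming a *nix search head):

$SPLUNK_HOME/bin/splunk btool savedsearches list --debug | grep cron_schedule | cat -A

A clean expression should end with just a $ (the line ending) and show no ^ or M- sequences.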
0 11 * * 1,4 means it should be executed on Mondays and Thursdays only, according to crontab.guru.
Hi @krishna4murali ,

On which days should the alert be executed? As you can read at https://it.wikipedia.org/wiki/Crontab, using the cron expression

0 11 * * 1,4

you execute it on Monday and Thursday.

So check the execution days of your alert; in any case, it shouldn't be executed on Tuesday.

Ciao.
Giuseppe
@palyogit 

Two main things stand out in the log:

WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded...
regexExtractionProcessor - Interpolated to processor::nullqueue

It looks like your truncation limit is being hit and the event is being discarded. Increase the TRUNCATE limit in props.conf and test again. E.g.:

[httpevent]
TRUNCATE = 20000

You can also refer to:
https://help.splunk.com/en/data-management/collect-http-event-data/use-hec-in-splunk-enterprise/http-event-collector-example

Regards,
Prewin
Splunk Enthusiast | Always happy to help!
If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
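A quick way to verify the new limit is to send an event larger than 10,000 bytes and confirm it is indexed intact (a sketch; the host, port and token are placeholders for your own):

# Send a ~15,000-byte event to HEC; it should no longer be truncated.
curl -k https://localhost:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d "{\"event\": \"$(head -c 15000 /dev/zero | tr '\0' 'x')\"}"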
An alert is configured with a scheduled cron trigger using the expression 0 11 * * 1,4.

But it's triggering on non-specified days, like Tuesday and Wednesday. What could be the problem here? I checked using crontab.guru, and it shows the expression validates correctly.

Thanks in advance.
Hi @dm1 ,

Until today, I always used the second one in many hundreds of projects without any issue. The fact that it is unsupported is news to me, but probably it was an oversight of mine.

The first one is Cisco-supported, so you could use it.

About instructions for ingestion: I'm not a network specialist, but Catalysts, like other network appliances, should send their logs via syslog, so you can receive syslogs directly in Splunk on a Heavy Forwarder, or (better) create an rsyslog input that writes the syslogs to a file that is then read by Splunk (see the sketch below).

Ciao.
Giuseppe
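A minimal sketch of that rsyslog-to-file pattern (the port, file path, source IP range, index and sourcetype are all assumptions to adapt to your environment):

# /etc/rsyslog.d/cisco.conf
module(load="imudp")
input(type="imudp" port="514")
if $fromhost-ip startswith '10.0.' then /var/log/cisco/catalyst.log
& stop

# inputs.conf on the forwarder that reads the file
[monitor:///var/log/cisco/catalyst.log]
sourcetype = cisco:ios
index = network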
@palyogit 

Check this documentation and try sending a sample event to HEC (for example with curl, as sketched below):

https://help.splunk.com/en/splunk-enterprise/get-started/get-data-in/9.4/get-data-with-http-event-collector/http-event-collector-examples
https://help.splunk.com/en/splunk-enterprise/get-started/get-data-in/9.4/get-data-with-http-event-collector/use-curl-to-manage-http-event-collector-tokens-events-and-services
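A minimal curl sketch along the lines of those docs (the host, port and token are placeholders):

curl -k https://localhost:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello world", "sourcetype": "httpevent"}'

A successful send returns {"text":"Success","code":0}.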
@palyogit 

Ensure that your HEC input includes a valid index=. Missing or mistyped values cause Splunk to drop data.

HttpInputDataHandler - handled token name=embedded … events_processed=1 … Truncating line because limit of 10000 bytes …

This means Splunk HEC received the event and parsed it, but truncated the line at ~10 kB, which likely leads to it being dropped before indexing.
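For example, a payload that sets the index explicitly in the event metadata (a sketch; the index, sourcetype and source values are placeholders):

{
  "event": {"action": "queued", "workflow_name": "linux-ci-pipeline"},
  "index": "main",
  "sourcetype": "httpevent",
  "source": "github_webhook"
}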
I need to onboard Cisco Catalyst 8500 router logs into Splunk. When I was looking for add-ons, I found the add-ons below that seem relevant.

Cisco Catalyst Add-on for Splunk - This is preferred as it's Cisco-built and supported. https://splunkbase.splunk.com/app/7538

Then there is the Add-on for Cisco Network Data - https://splunkbase.splunk.com/app/1467, but it is unsupported.

The instructions in the Cisco-built add-on are not very clear on how to onboard the router logs. Can someone please help?
Hello,

In Splunk I have a query that I use to show data with an xyseries. The output should be displayed as a Column chart in Dashboard Studio, but when I save the dashboard I receive this error: "Dashboard Studio only supports Trellis layout for single value visualizations."

This is the example query:

| makeresults
| eval Displayname="DSP10", Month="2505-06", duration=100
| append [ | makeresults | eval Displayname="DSP10", Month="2505-07", duration=200 ]
| append [ | makeresults | eval Displayname="DSP20", Month="2505-06", duration=50 ]
| append [ | makeresults | eval Displayname="DSP20", Month="2505-07", duration=90 ]
| table Month Displayname duration
| xyseries Month Displayname duration

Are there any other options to display this in Studio in a Trellis layout, as a Column chart or a Line chart?

Regards,
Harry
HTTP event data is not received at the index, though in the log it says HttpInputDataHandler - handled token name=xyz.

How do I debug this? I checked splunkd.log and could not find anything fishy.

07-16-2025 16:14:39.809 +0800 DEBUG HttpInputDataHandler - handled token name=embedded, channel=n/a, source_IP=x.y.z.a, reply=0, events_processed=1, http_input_body_size=10338, parsing_err="", body_chunk="{"action": "queued", "workflow_job": {"id": 46075907488, "run_id": 16313804135, "workflow_name": "linux-ci-pipeline", "head_branch": "dts_changes", "run_url": "https://api.github.com/repos/org/repo-name/actions/runs/16313804135", "run_attempt": 1, "node_id": "CR_kwDOHHhjyM8AAAAKulaNoA", "head_sha": "9fd419d2fcd5fc775c4b61a5392133630d5763b8", "url": "https://api.github.com/repos/org/repo-name/actions/job"
07-16-2025 16:14:39.809 +0800 DEBUG UTF8Processor - Done key received for: source::/infrastructure/da_infra/splunk/tarball/splunk_instance/splunk/var/log/splunk/metrics.log|host::baip052|splunkd|2532
07-16-2025 16:14:39.809 +0800 INFO UTF8Processor - Converting using CHARSET="UTF-8" for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to metrics_log_clone::s
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted metrics_log_clone::s
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - Using truncation length 10000 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to _metrics
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - LB_CHUNK_BREAKER uses truncation length 2000000 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - Using lookbehind 100 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted _metrics
07-16-2025 16:14:39.809 +0800 WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 10338 - data_source="http:embedded", data_host="10.244.215.89:8088", data_sourcetype="httpevent"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to group::pipeline
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted group::pipeline
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to name::dev-null
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted name::dev-null
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to processor::nullqueue
07-16-2025 16:14:39.809 +0800 DEBUG UTF8Processor - Done key received for: source::http:embedded|host::10.244.215.89:8088|httpevent|
A search with a specific start/end date (for the test, always the same) took 223.179 secs, of which 117.82 secs were spent in command.search.kv. It always seems the time in command.search.kv is almost half the search duration.

We have a clustered site: 12 indexers and SmartStore. The first blame was SmartStore, but the buckets are in cache, so it is not SmartStore anymore.

The question is why the fields are extracted (from wineventlogs) when there are no field extractions in the apps (or are winevents extracted by default because the server is "vanilla")? It seems to cost a lot of time to extract those.
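Would disabling automatic key/value extraction at search time be the right way to test this? A minimal sketch of what I could try (assuming props.conf access on the search head; the sourcetype name is a placeholder):

# props.conf on the search head (sourcetype name is a placeholder)
[WinEventLog]
KV_MODE = none

If command.search.kv drops out of the job inspector timings after this, the automatic extractions were the cost.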
Thank you and livehybrid for the assistance.

I went ahead and made a local folder, copy-pasted the inputs.conf file, and replaced the passAuth variable with passAuth = sc_admin, as I believe that's the admin user. I also added all the available roles I could, just for testing purposes.

Unfortunately I haven't received any events, so I'm wondering if I did something wrong, and whether there's a debug/log somewhere that would show if something were wrong. The audit is set for every 60 seconds, so I should be getting something every minute, but it just stopped entirely. I did restart the service, refreshed, and toggled the audit input on and off. Screenshots attached.
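In the meantime, one place I can check is Splunk's own internal logs; a sketch of a search that might surface input errors (the filters are guesses, since I don't know the add-on's logging component):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count by component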
Hello, we are using Splunk Observability, migrating from another solution. We have certain scripts that validate aggregated metrics (namely the average of a p99). Working with Splunk Observability, we are having difficulty finding an API/method that will give us this information as a single metric value over a given timeline.

This is what we want to achieve: from X to Y, give me the average of p99 for "latency_metric". The expected result should be a single data point, the average p99 of the latency metric over that timeframe, namely something like: 300ms.

Any idea what we can use?
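One candidate worth trying is the SignalFlow execute API, which runs a program over a fixed start/stop range; a sketch along these lines, untested, where the realm, token, metric name and rollup window are all assumptions:

# POST a SignalFlow program for the X..Y range (timestamps in epoch milliseconds)
curl "https://stream.<realm>.signalfx.com/v2/signalflow/execute?start=<X_ms>&stop=<Y_ms>&immediate=true" \
  -H "X-SF-Token: <your-access-token>" \
  -H "Content-Type: text/plain" \
  --data "data('latency_metric').mean(over='1h').publish()"

Setting the mean(over=...) window to the full X-to-Y duration should collapse the output to effectively a single averaged value for the timeframe.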
@PrewinThomas  Just confirming: will it capture warcraft-9.0.78\logs\* ?
@livehybrid  Just confirming: will it capture warcraft-9.0.78\logs\* ?