
All Posts

The /event endpoint gives you more flexibility than /raw, so I'd advise using /event anyway. But in order for the HEC input _not_ to skip timestamp recognition (which it does by default - it either gets the timestamp from the field pushed with (not in!) an event or assigns the current timestamp), you must add the ?auto_extract_timestamp=true parameter to the URL. Like https://your_indexer:8088/services/collector/event?auto_extract_timestamp=true
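For illustration, a minimal Python sketch of pushing one event to that endpoint; the hostname, token, index and sourcetype values are placeholders, not taken from this thread.

# Minimal sketch: send a log line to HEC's /event endpoint and let Splunk
# run timestamp extraction on it (auto_extract_timestamp=true).
# The host, port, token, index and sourcetype below are placeholders.
import requests

HEC_URL = "https://your_indexer:8088/services/collector/event?auto_extract_timestamp=true"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # replace with your HEC token

payload = {
    "event": "2024-01-30 12:34:56,789 INFO something happened",  # timestamp lives inside the event text
    "sourcetype": "my_sourcetype",
    "index": "myindex",
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=payload,
    verify=False,  # only for lab setups with self-signed certs
)
resp.raise_for_status()
print(resp.json())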
Hi. I also have large files on some servers, about 10 GB per day in 3 files per server, and during the day those files are very delayed in being ingested, with ACK set to true. While those files take 1 to 4 hours to be indexed, other files on the same servers are ingested fine in real time. So, also with UF 8.2.12, I think it's a throughput limit of the network infrastructure, or maybe too much data from those inputs 🤷 I also have:

[thruput]
maxKBps = 0

[general]
parallelIngestionPipelines = 2

[queue]
maxSize = 100MB

[queue=parsingQueue]
maxSize = 10MB

I don't think there are other methods, since it's an inherent (physiological) problem 🤷 The only way, maybe, is to add more indexers to the Splunk infrastructure or ask the application teams to split those files across more servers 🤷
To be honest, I'm wary of any app not built and supported by Splunk - not only for security reasons, but also because many third-party apps are simply badly written and you can't get them to work without fixing them yourself.
1. I don't understand what you mean by "I have files sent to search head". If you're trying to use your SH also as a forwarder... well, that's not a good practice. But it shouldn't be the cause of the problem here.
2. Since you're sending SYNs, the indexer is listening on the port, and it apparently even gets those SYNs on the wire, there are two possible explanations: either your local firewall (iptables? firewalld? that new fancy nftables?) is filtering the packets, or you have badly configured routing and the packets are dropped by rp_filter.
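As a rough check of the second point from the forwarder's side, a small connection probe can tell you whether anything answers on the receiving port; the host and port below are placeholders, not values from this thread.

# Quick connectivity probe from the forwarder host: if the SYN is filtered
# (firewall or rp_filter), this times out; if the indexer answers, it connects.
import socket

indexer = ("your_indexer.example.com", 9997)  # placeholder indexer and receiving port

try:
    with socket.create_connection(indexer, timeout=5):
        print("TCP connection established - the port is reachable")
except OSError as err:
    print(f"Connection failed: {err} - check firewall rules and routing/rp_filter")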
This is simply bad data (at least from Splunk's point of view). Even if you managed to break it into events (though I honestly see no way to reliably make sure you break in the proper places and only in those places; manipulating structured data with just regexes is simply not reliable because regexes are not structure-aware), you'd still have those headers and footers attached to the end of another event. The resulting events would also have inconsistent contents - one event would have an "event1" field, another an "event2" field. The best solution here would be to process your data and split it before pushing it to Splunk.
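As a sketch of that last suggestion, here is a hypothetical Python pre-processing step; the file name and the header/footer/eventN structure are invented for illustration, since the real format isn't shown in this thread.

# Hypothetical pre-processing sketch: parse the file as JSON (structure-aware,
# unlike regexes), drop header/footer keys, and emit one record per eventN key.
# The names "export.json", "header", "footer" and "event1"/"event2" are assumptions.
import json

with open("export.json") as f:
    doc = json.load(f)

events = []
for key, value in doc.items():
    if key in ("header", "footer"):
        continue  # metadata we don't want glued onto events
    # normalise: every record gets the same field name regardless of the eventN suffix
    events.append({"event": value, "original_key": key})

# each item in "events" can now be pushed to Splunk individually (e.g. via HEC)
for record in events:
    print(json.dumps(record))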
1. This is not recursion.
2. This is an old thread with possibly low visibility. Please create a new thread and describe your problem, what data you have, and what results you need, to raise your chances of getting a meaningful response.
@gcusello Our Splunk is hosted in the Cloud and is managed by Splunk Support. We only have access to the Search Heads, not to the License Master server.
It's not clear what your data is and what you want to get from it. But as a general rule, you can't remove something from your data and then process it later. At every pipe in your pipeline you have only the data you got from the earlier steps. So, for example, if you do:

index=myindex
| fields - source
| eval sourcematch=if(source="mysource",1,0)

the field sourcematch in your results will always be 0, because you removed the field "source" from your resulting events, so you can't rely on it to calculate something in further steps of your processing pipeline.
Hi @anandhalagaras1, where did you run this search? You should try it on the License Master. Ciao, Giuseppe
OK. If your overall usage of $SPLUNK_HOME differs greatly from the size of your indexes, check whether you have crash logs and coredumps lying around (you only need them if you're going to debug your crashes with support; otherwise you can safely delete them).
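If it helps, a quick sketch for spotting such files; the paths below are the usual default locations (crash-*.log under var/log/splunk, core.* in $SPLUNK_HOME), so treat them as assumptions for your particular install.

# Rough sketch to spot crash logs and core dumps eating disk under $SPLUNK_HOME.
import os
from pathlib import Path

splunk_home = Path(os.environ.get("SPLUNK_HOME", "/opt/splunk"))

candidates = list((splunk_home / "var" / "log" / "splunk").glob("crash-*.log"))
candidates += list(splunk_home.glob("core.*"))

# print the biggest offenders first
for path in sorted(candidates, key=lambda p: p.stat().st_size, reverse=True):
    print(f"{path.stat().st_size / 1024 / 1024:8.1f} MB  {path}")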
@Mitesh_Gajjar I am not getting any results even though I ran the search query for All Time. We are using Splunk Cloud and the Cloud Monitoring Console app is installed on our Search Head. So I have tried the query in the Search and Reporting app of the SH and also in the CMC app's Search, but no results. But within the last 60 days we actually had multiple breaches of our overall license.
When I try to use the query I am not getting any results. I tried it in the Search and Reporting app as well as in the CMC app.
Hi @richgalloway,
Thanks for your response!
I'm using this search in the macro definition and I want this fixed - is there any possible way to tweak this command to make it work? I need that value in a later part of this search; I just need to skip it at this moment.
Thanks in advance!
Manoj Kumar S
Hi @anandhalagaras1, you can try this query.

| rest /services/licenser/pools
| eval total_quota_gb = round(usage_quota / (1024 * 1024 * 1024), 2)
| eval used_gb = round(usage_used / (1024 * 1024 * 1024), 2)
| eval usage_percentage = round((used_gb / total_quota_gb) * 100, 2)
| table total_quota_gb, used_gb, usage_percentage
| where usage_percentage >= 70 AND usage_percentage < 80
| eval alert_level = "70%-79%"
| eval alert_message = "License usage has reached " . usage_percentage . "%. Please take action."
| eval alert_level = if(usage_percentage >= 80 AND usage_percentage < 90, "80%-89%", alert_level)
| eval alert_message = if(usage_percentage >= 80 AND usage_percentage < 90, "License usage has reached " . usage_percentage . "%. Please take immediate action.", alert_message)
| eval alert_level = if(usage_percentage >= 90, "90% and above", alert_level)
| eval alert_message = if(usage_percentage >= 90, "License usage has crossed critical threshold at " . usage_percentage . "%. Immediate attention required!", alert_message)
| table alert_level, alert_message
Hi @anandhalagaras1,
there's an error: the eval command is missing before the if. Anyway, please try to use this search in the Monitoring Console:

| rest splunk_server_group=dmc_group_license_master /services/licenser/pools
| join type=outer stack_id splunk_server
    [ rest splunk_server_group=dmc_group_license_master /services/licenser/groups
    | search is_active=1
    | eval stack_id=stack_ids
    | fields splunk_server stack_id is_active ]
| search is_active=1
| fields splunk_server, stack_id, used_bytes
| join type=outer stack_id splunk_server
    [ rest splunk_server_group=dmc_group_license_master /services/licenser/stacks
    | eval stack_id=title
    | eval stack_quota=quota
    | fields splunk_server stack_id stack_quota ]
| stats sum(used_bytes) as used_bytes max(stack_quota) as stack_quota by splunk_server
| eval usedGB=round(used_bytes/1024/1024/1024,3)
| eval totalGB=round(stack_quota/1024/1024/1024,3)
| eval percentage=round(usedGB / totalGB, 3)*100
| fields splunk_server, percentage, usedGB, totalGB
| where percentage > 80
| rename splunk_server AS Instance, percentage AS "License quota used (%)", usedGB AS "License quota used (GB)", totalGB as "Total license quota (GB)"

Ciao.
Giuseppe
@Mitesh_Gajjar, when I use the search query I am getting an error as below: Unknown search command 'if'. So kindly help to check and update the same.
Hi,
Yes, my attempt with appendcols works. But yours works very well too. Since you said mine takes longer to search compared to your optimized one, I will use yours. Thanks very much for suggesting this better, optimized query.
Thanks
Hi @chakavak,
maybe there's another server.conf; please try:

cd \Program Files\splunkuniversalforwarder\bin
splunk btool server list --debug > my_server.txt

and search in my_server.txt to see if there's another "hostname" parameter in another server.conf file.
Ciao.
Giuseppe
That's exactly what the subsearch will do. The output of a subsearch is to make <field>=<value> OR <field>=<value>..., where the results of the subsearch are a table with the field name 'field'. You can see the output of the subsearch by running it manually as a normal search and adding | format to the end of the search, which is implicit in the subsearch.
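As a loose illustration (plain Python, not Splunk itself) of what that implicit | format produces, assuming a one-column subsearch result; the field name and values are invented for the example.

# Toy illustration: turn a subsearch-style result table into the OR'ed string
# of field=value pairs that the outer search receives as its filter.
subsearch_results = [
    {"field": "valueA"},
    {"field": "valueB"},
    {"field": "valueC"},
]

clauses = [f'{k}="{v}"' for row in subsearch_results for k, v in row.items()]
expanded = "( " + " OR ".join(f"( {c} )" for c in clauses) + " )"
print(expanded)
# ( ( field="valueA" ) OR ( field="valueB" ) OR ( field="valueC" ) )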
You don't really need appendcols, which will make the search longer

index=product_db time="1706589466.725491" OR time="1705566003.777518"
| streamstats c
| eval p_name_{c}=json_array_to_mv(json_keys(_raw))
| stats values(p_name_*) as p_name_*
| eval p_unique = mvmap(p_name_1, if(isnull(mvfind(p_name_2, "^".p_name_1."$")), p_name_1, null()))
| eval p_missing = mvmap(p_name_2, if(isnull(mvfind(p_name_1, "^".p_name_2."$")), p_name_2, null()))
| table p_unique p_missing

Assuming you only have two events, that should work. Did your attempt work?