All Posts
No. Let me explain with an example. An event with timestamp 3:14 arrives at 3:17, and an event with timestamp 3:15 arrives at 3:18. That means if you run a search at 3:18 with earliest=-2m@m latest=now, it searches for events between 3:16 and 3:18. Logically this will never include any events, because events always arrive 3 minutes late.

The solution is to never search the last 3 minutes, and let those events be picked up by the next scheduled run, for example:

earliest=-63m@m latest=-3m@m
OR
earliest=-13m@m latest=-3m@m

earliest can be anything, but keep latest set so that the search never covers the most recent events, to avoid the missed-events issue.

There is another solution as well, using _index_earliest and _index_latest, but that's a topic for another time (a bit complicated).

I hope this helps!
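For a scheduled search, the same idea can be expressed in savedsearches.conf. A minimal sketch, assuming an hourly search with a 3-minute ingestion lag; the stanza name, cron expression, and index are placeholders:

[hourly_report_with_lag_buffer]
cron_schedule = 5 * * * *
dispatch.earliest_time = -63m@m
dispatch.latest_time = -3m@m
search = index=myindex | stats count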
Hi @gcusello, all of our sites are Drupal, and the raw log looks like this:

{"time":"2024-07-18T09:29:59.900525659-05:00","stream":"stdout","logtag":"F","message":"10.42.11.59 - - [18/Jul/2024:14:29:59 +0000] \"POST / HTTP/1.1\" 200 46989 \"-\" \"Microsoft Office/16.0 (Windows NT 10.0; Microsoft Word 16.0.17628; Pro)\"","kubernetes":{"pod_name":"apache-4","namespace_name" ...

We want to calculate total bandwidth.
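For what it's worth, a minimal sketch of one way to sum the response bytes from the embedded Apache access line; the index and sourcetype are assumptions to adapt to your environment, and it assumes the JSON fields (such as message) are already extracted:

index=drupal sourcetype=kube:container:apache
| rex field=message "HTTP/1\.\d\"\s+(?<status>\d+)\s+(?<bytes>\d+)"
| timechart span=1mon sum(bytes) as total_bytes
| eval total_GB=round(total_bytes/1024/1024/1024,2)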
Since Microsoft Teams has deprecated Office 365 connectors, standard incoming webhooks, and the use of MessageCard-format cards for sending messages, this Microsoft Teams messages publication add-on does not work with the Workflows endpoint. Using a standard webhook alert action with a workflow URL also returns errors, since the payload is not in the Adaptive Card format that Workflows expect. Do you have a solution for connecting alerts to Microsoft Teams channels now that connectors are deprecated?
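For reference, a Teams Workflow ("When a Teams webhook request is received") generally expects a payload shaped roughly like the following; this is only a minimal sketch with placeholder text, not what the add-on currently sends:

{
  "type": "message",
  "attachments": [
    {
      "contentType": "application/vnd.microsoft.card.adaptive",
      "content": {
        "type": "AdaptiveCard",
        "version": "1.4",
        "body": [
          { "type": "TextBlock", "text": "Splunk alert: <alert name>", "weight": "Bolder" },
          { "type": "TextBlock", "text": "<result summary>", "wrap": true }
        ]
      }
    }
  ]
}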
Hello,

I'd like to create a single value viz that displays the percent change from a point in time to now. Basically, I have a dashboard with a panel that simply counts the number of records in the given time range. The time input is a simple time picker and the base search is simply: index=myindex | stats count

I would like to add a panel, maybe a single value viz, that shows the percent change. For example, if the default is "Last 24 hours", I would like to show the count for the last 24 hours and the percent change from the previous 24 hours. Likewise, if the user selects "Last 7 days", I would like it to show the count for the last 7 days and the percent change from the 7 days before that.

Thanks for the help
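A minimal sketch of the fixed 24-hour case (the index name is a placeholder; making the windows follow the time picker token is more involved):

index=myindex earliest=-48h latest=now
| eval period=if(_time >= relative_time(now(), "-24h"), "current", "previous")
| stats count(eval(period="current")) as current count(eval(period="previous")) as previous
| eval percent_change=round((current - previous) / previous * 100, 2)

The single value viz can then display percent_change, or display current with percent_change shown alongside it.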
While there can be many reasons for memory growth, one of them could be increased memory usage by the idle search process pool (search-launcher).

index=_introspection component=PerProcess host=<any one SH or IDX host>
| timechart span=5s sum(data.mem_used) as mem_usedMB by data.process_type useother=f usenull=f

Example: if memory usage by `search-launcher` is far higher than by `search`, then the idle search process pool (search-launcher) is wasting system memory. If you see that trend, you want to reduce the idle search process pool. There are several options to reduce the idle search process pool in limits.conf. Another option is to set enable_search_process_long_lifespan = false in server.conf (a new option in 9.1 and above):

enable_search_process_long_lifespan = <boolean>
* Controls whether the search process can have a long lifespan.
* Configuring a long lifespan on a search process can optimize performance by reducing the number of new processes that are launched and old processes that are reaped, and is a more efficient use of system resources.
* When set to "true": Splunk software does the following:
  * Suppresses increases in the configuration generation. See the 'conf_generation_include' setting for more information.
  * Avoids unnecessary replication of search configuration bundles.
  * Allows a certain number of idle search processes to live.
  * Sets the size of the pool of search processes.
  * Checks memory usage before a search process is reused.
* When set to "false": The lifespan of a search process at the 50th percentile is approximately 30 seconds.
* NOTE: Do not change this setting unless instructed to do so by Splunk Support.
* Default: true

Why does the idle search process pool appear to be unused (more idle searches than the actual number of searches running on the peer)? Before a search request is dispatched to peers, SHCs/SHs first need to find the common knowledge bundle across peers. On a peer, only an idle search process created with the matching common knowledge bundle is eligible for reuse. That is why in most cases the idle search process pool remains unused: the overall pool is a collection of idle search processes associated with different knowledge bundles. Now think of a scenario with multiple SH clusters (for example ES, ITSI, ad-hoc), each replicating its own knowledge bundles. The idle search process pool is then a collection of idle search processes associated with different knowledge bundles from different search heads.

You can search for enable_search_process_long_lifespan in limits.conf to see its impact; it controls a lot of configs. But the main reason for memory growth is max_search_process_pool (default 2048 idle search processes):

max_search_process_pool = auto | <integer>
* The maximum number of search processes that can be launched to run searches in the pool of preforked search processes.
* The setting is valid if the 'enable_search_process_long_lifespan' setting in the server.conf file is set to "true".
* Use this setting to limit the total number of running search processes on a search head or peer so that it is prevented from being overloaded or using high system resources (CPU, memory, etc.).
* When set to "auto": Splunk server determines the pool size by multiplying the number of CPU cores and the allowed number of search processes (16). The pool size is 64 at minimum.
* When set to "-1" or another negative value: The pool size is not limited.
* Has no effect on Windows or if "search_process_mode" is not "auto".
* Default: 2048

If an instance is running 1000 searches per minute and bundle replication is not frequent, why create a pool of 2048 idle search processes when the maximum requirement is 1000? With surplus memory this is not an issue, but a 2048-process idle pool is not OK for instances with limited memory.
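If limited memory is the concern, the pool can be made smaller. A minimal sketch with illustrative values (and, per the spec text above, do not change enable_search_process_long_lifespan without guidance from Splunk Support):

# limits.conf on the search head / indexer -- illustrative value
[search]
max_search_process_pool = 256

# server.conf -- only if instructed by Splunk Support
[general]
enable_search_process_long_lifespan = false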
I reached out to Splunk Support regarding this issue and they mentioned that it is associated with a known issue (SPL-245333). It is not fixed yet; the fix is expected in version 9.3, so it is understandable why we still see it in current versions. If these events are cluttering your logs, you can blacklist them: https://community.splunk.com/t5/Getting-Data-In/Filtering-events-using-NullQueue/m-p/66392
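A minimal nullQueue sketch along the lines of that link; the sourcetype and regex are placeholders you would adapt to the noisy events:

# props.conf (on the indexers or heavy forwarders)
[your:sourcetype]
TRANSFORMS-drop_noise = drop_known_issue_noise

# transforms.conf
[drop_known_issue_noise]
REGEX = <pattern matching the unwanted log message>
DEST_KEY = queue
FORMAT = nullQueue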
Hello @bil151515, we have done this successfully; happy to share details if needed.
Hi @houys, your question is too vague; could you describe your data in more detail? Which data sources (technologies)? Ciao. Giuseppe
Hi Community, we are using Splunk Enterprise. From Splunk Search & Reporting, how can we sum a site's traffic, e.g. the monthly bandwidth? Thanks, Steve
Hi @scout29, read this: https://docs.splunk.com/Documentation/Splunk/9.2.2/Search/Addsparklinestosearchresults otherwise you could use timechart instead of stats. Ciao. Giuseppe
I am trying to create a table showing the ingestion (usage) in GB by index over the past 24 hours. I am using this search to do that successfully:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=*
| stats sum(b) as Usage by idx
| eval Usage=round(Usage/1024/1024/1024,2)
| rename idx AS index
| sort -Usage

Now I would like to add a sparkline next to the Usage column showing the trend of ingestion over the past 7 days for each index. How can I do this?
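A minimal sketch of one way to do this, run over the last 7 days so the sparkline covers the full week while Usage stays limited to the last 24 hours (adjust the sparkline span to taste):

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* earliest=-7d
| eval GB=b/1024/1024/1024
| stats sparkline(sum(GB), 1d) as Trend sum(eval(if(_time >= relative_time(now(), "-24h"), GB, 0))) as Usage by idx
| eval Usage=round(Usage,2)
| rename idx AS index
| sort -Usage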
Hello everyone! I've created a custom alert action with an HTML file located at Splunk\etc\apps\my-app\local\data\ui\alerts\index.html, and my JavaScript and CSS files are in Splunk\etc\apps\my-app\appserver\static\index.js. My goal is to dynamically add fields to a form using JavaScript inside my HTML file. I'm encountering challenges with loading the JavaScript due to potential security concerns in Splunk apps. Despite this, I'm looking for a solution to implement this small functionality. Any assistance would be greatly appreciated. Thank you for your help!

HTML code:

<!DOCTYPE html>
<html>
<head>
  <title>Custom Alert Action</title>
  <script></script>
  <script src="../../../../appserver/static/index.js"></script>
</head>
<body>
  <h1>Custom Alert Action</h1>
  <button id="performActionBtn">Perform Action</button>
</body>
</html>

JS code:

require([
  'jquery',
  'splunkjs/mvc',
  'splunkjs/mvc/simplexml/ready!'
], function($, mvc) {
  console.log('js loaded successfully');

  function createfunction() {
    alert('js loaded successfully!');
  }

  $(document).ready(function() {
    console.log('js loaded successfully');
    $('#performActionBtn').click(function(event) {
      event.preventDefault();
      createfunction();
    });
  });
});

hi dear @avikramengg, I saw a similar question you asked earlier. Have you found a solution? If so, could you please advise me as well? Thanks!
Actually, you can blacklist by event codes or by regex (with the caveat that if you use renderXml=true, you have to specify the blacklist differently).
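A minimal inputs.conf sketch for the non-XML case; the event codes and the regex are illustrative only:

# inputs.conf on the Windows universal forwarder
[WinEventLog://Security]
# drop specific event codes outright
blacklist1 = EventCode="4662"
# or drop by event code plus a regex on the message
blacklist2 = EventCode="4663" Message="Account Name:\s+svc_example"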
You can of course write tons of custom JS and embed that code in a Splunk dashboard. This way you could do practically anything (including - for example - running arcade games in jsmess) but will it be "in Splunk"? I don't think so.
The way to get "data like bots dataset" would be to ingest it with a UF and then copy out buckets with indexed data. Also remember that if an incident had already happened the attackers might have r... See more...
The way to get "data like bots dataset" would be to ingest it with a UF and then copy out buckets with indexed data. Also remember that if an incident had already happened the attackers might have removed as many traces of their activity as they could. You can try to do some forensic analysis but that's not something Splunk is meant for. Yes, in a skilled person's hands it can be a tool helping in such analysis but it's not a forensic solution.
As far as I remember, the add-on isn't that great and has some problems of its own, especially with CEF events. Probably the most reliable way of getting the data would be to send the JSONs to a HEC endpoint. Then you could extract the CIM mappings from the app and wrap them into your own app.
Hi @vanvan,
I had a similar issue; my hint is to use an rsyslog (or syslog-ng) server to receive the logs and write them to files, then use the HF to read those files, process them, and send them to the indexers. This way you have two advantages: better performance and fewer issues related to Splunk overload, since syslog is still received even when Splunk is down.

Then, if you have fast disks and your network isn't slow, you can use parallelIngestionPipelines to make better use of your CPUs. How many CPUs do your HFs have? I had 16 CPUs and moved to 24 to get better-performing queues.

Then you could optimize your configuration by enlarging the queues, which you can check on your search heads by running this search:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue", name=="indexqueue", "4 - Indexing Queue", name=="parsingqueue", "1 - Parsing Queue", name=="typingqueue", "3 - Typing Queue", name=="splunktcpin", "0 - TCP In Queue", name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats Median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where (fill_percentage>70 AND name!="4 - Indexing Queue") OR (fill_percentage>70 AND name="4 - Indexing Queue")
| sort -_time

Last hint: check the regexes in your custom add-ons to avoid unnecessary overhead.
Ciao.
Giuseppe
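A minimal server.conf sketch on the HF covering both hints; the values are illustrative, and only enlarge queues that the search above shows as consistently full:

# server.conf on the heavy forwarder
[general]
parallelIngestionPipelines = 2

# enlarge a queue that is consistently over ~70% full (example sizes)
[queue=parsingQueue]
maxSize = 10MB

[queue=aggQueue]
maxSize = 10MB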
With this data you will have some "bad events": you might be able to extract the structures from the middle, but you will end up with some dangling "headers" or "footers". I'd suggest you pass this through some external filter that extracts the contents based on structure, rather than just breaking with a regex.
Try this variation on the settings. It should better account for newlines.

[custom_json_sourcetype]
SHOULD_LINEMERGE = false
KV_MODE = json
LINE_BREAKER = }(,[\S\s]*){
Hi, have you tried changing this parameter?

pipelineSetSelectionPolicy = round_robin | weighted_random

If this parameter is set to "round_robin", that could explain your issue. Change it to weighted_random (on the indexers) and check whether things look better in the Monitoring Console. I also think you can set "parallelIngestionPipelines" to 4 on your UF if it is set to 2 on the indexers.
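A minimal sketch of the corresponding server.conf changes; the values are illustrative and should be tuned for your hardware:

# server.conf on the indexers
[general]
pipelineSetSelectionPolicy = weighted_random
parallelIngestionPipelines = 2

# server.conf on the universal forwarder
[general]
parallelIngestionPipelines = 4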