All Posts


I reached out to Splunk Support regarding this issue, and they mentioned that it is associated with a known issue (SPL-245333). It is not fixed yet; the fix is expected in version 9.3, so it is understandable why we still see it in current versions. If these events are cluttering your logs, you might blacklist them: https://community.splunk.com/t5/Getting-Data-In/Filtering-events-using-NullQueue/m-p/66392
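For reference, the nullQueue approach from that thread is a props.conf/transforms.conf pair applied at parse time (on the indexer or heavy forwarder); a minimal sketch, with the sourcetype and regex as placeholders:

# props.conf
[your_sourcetype]
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = pattern_matching_the_noisy_events
DEST_KEY = queue
FORMAT = nullQueue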
Hello @bil151515, we have done this successfully; let us know if you need details.
Hi @houys, your question is too vague. Could you describe your data in more detail? Which data sources (technologies)? Ciao. Giuseppe
Hi Community, we are using Splunk Enterprise. From Splunk Search & Reporting, how can we sum a site's traffic, such as monthly bandwidth? Thanks, Steve
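One possible sketch, assuming a hypothetical index named web and a bytes field in the events:

index=web
| timechart span=1mon sum(bytes) AS monthly_bytes
| eval monthly_GB=round(monthly_bytes/1024/1024/1024,2)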
Hi @scout29, read this: https://docs.splunk.com/Documentation/Splunk/9.2.2/Search/Addsparklinestosearchresults Otherwise you could use timechart instead of stats. Ciao. Giuseppe
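For example, a minimal sketch of the sparkline approach against the license-usage search discussed below, assuming the search runs over the last 7 days (field names b and idx as in license_usage.log):

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* earliest=-7d@d
| stats sparkline(sum(b)) as Trend sum(b) as Usage by idx
| eval Usage=round(Usage/1024/1024/1024,2)
| rename idx AS index
| sort -Usage

Note the sparkline covers the same time range as the search, so a 24-hour total next to a 7-day trend would need the two ranges combined (e.g. restricting the Usage sum by _time, or joining two searches).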
I am trying to create a table showing the ingestion (usage) in GB by index over the past 24 hours. I am using this search to do that successfully:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=*
| stats sum(b) as Usage by idx
| eval Usage=round(Usage/1024/1024/1024,2)
| rename idx AS index
| sort -Usage

Now I would like to add a sparkline next to the Usage column showing the trend of ingestion over the past 7 days for each index. How can I do this?
Hello everyone! I've created a custom alert action with an HTML file located at Splunk\etc\apps\my-app\local\data\ui\alerts\index.html, and my JavaScript and CSS files are in Splunk\etc\apps\my-app\appserver\static\index.js. My goal is to dynamically add fields to a form using JavaScript inside my HTML file. I'm encountering challenges with loading the JavaScript due to potential security concerns in Splunk apps. Despite this, I'm looking for a solution to implement this small functionality. Any assistance would be greatly appreciated. Thank you for your help!

HTML code:

<!DOCTYPE html>
<html>
<head>
    <title>Custom Alert Action</title>
    <script src="../../../../appserver/static/index.js"></script>
</head>
<body>
    <h1>Custom Alert Action</h1>
    <button id="performActionBtn">Perform Action</button>
</body>
</html>

JS code:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function($, mvc) {
    console.log('js loaded successfully');

    function createfunction() {
        alert('js loaded successfully!');
    }

    $(document).ready(function() {
        $('#performActionBtn').click(function(event) {
            event.preventDefault();
            createfunction();
        });
    });
});

hi dear @avikramengg, I saw a similar question you asked earlier. Have you found a solution? If so, could you please advise me as well? Thanks!
Actually, you can blacklist by event codes or by regex (with the caveat that if you use renderXml=true, you have to specify the blacklist differently).
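A minimal inputs.conf sketch of both forms, with the channel, event codes, and pattern as placeholders (with renderXml=true the regex has to match the rendered XML instead; check the inputs.conf spec):

[WinEventLog://Security]
# blacklist by event code (placeholder codes)
blacklist = 4648,4661
# or blacklist by regex against specific keys
blacklist1 = EventCode="^4662$" Message="(?i)your noisy pattern"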
You can of course write tons of custom JS and embed that code in a Splunk dashboard. This way you could do practically anything (including - for example - running arcade games in jsmess) but will it be "in Splunk"? I don't think so.
The way to get "data like the BOTS dataset" would be to ingest it with a UF and then copy out buckets with the indexed data. Also remember that if an incident has already happened, the attackers might have removed as many traces of their activity as they could. You can try to do some forensic analysis, but that's not something Splunk is meant for. Yes, in a skilled person's hands it can be a tool that helps with such analysis, but it's not a forensic solution.
As far as I remember, the add-on isn't that great and has some problems, especially with CEF events. Probably the most reliable way of getting the data would be to send the JSONs to a HEC endpoint. Then you could extract the CIM mappings from the app and wrap them into your own app.
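For reference, a minimal sketch of sending an event to the HEC endpoint with curl, assuming a placeholder host, token, index, and sourcetype:

curl -k "https://your-splunk-host:8088/services/collector/event" \
    -H "Authorization: Splunk YOUR-HEC-TOKEN" \
    -d '{"event": {"sample": "json payload"}, "sourcetype": "vendor:json", "index": "main"}'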
Hi @vanvan, I had a similar issue. My hint is to use an rsyslog (or syslog-ng) server to receive the logs and write them to files, then use the HF to read those files, process them, and send them to the indexers. This way you have two advantages: better performance and fewer issues related to Splunk overload, and syslog is still received even when Splunk is down.

Then, if you have fast disks and your network isn't slow, you can use parallel ingestion pipelines to make better use of your CPUs. How many CPUs do your HFs have? I had 16 CPUs and moved to 24 to get more performant queues.

Then you could optimize your configuration by enlarging the queues, which you can check on your search heads by running this search:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
    name=="indexqueue", "4 - Indexing Queue",
    name=="parsingqueue", "1 - Parsing Queue",
    name=="typingqueue", "3 - Typing Queue",
    name=="splunktcpin", "0 - TCP In Queue",
    name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats Median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where (fill_percentage>70 AND name!="4 - Indexing Queue") OR (fill_percentage>70 AND name="4 - Indexing Queue")
| sort -_time

Last hint: check the regexes in your custom add-ons to avoid unnecessary overhead. Ciao. Giuseppe
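For reference, a minimal sketch of the parallel-pipelines setting, assuming it is set in server.conf on the HF and that spare CPU cores are available:

# server.conf
[general]
parallelIngestionPipelines = 2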
With this data you will have some "bad events": while you might be able to extract the structures from the middle, you will have some dangling "headers" or "footers". I'd suggest you pass this through some external filter that extracts the contents based on structure, rather than just breaking it with a regex.
Try this variation on the settings. It should better account for newlines.

[custom_json_sourcetype]
SHOULD_LINEMERGE = false
KV_MODE = json
LINE_BREAKER = }(,[\S\s]*){
Hi, have you tried changing this parameter?

pipelineSetSelectionPolicy = round_robin | weighted_random

If this parameter is set to round_robin, that can explain your issue. Change it to weighted_random on the indexers and check whether things look better in the Monitoring Console. I think you can also set parallelIngestionPipelines to 4 on your UF if it's set to 2 on the indexers.
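A minimal sketch of that change, assuming it goes in server.conf on the indexers:

# server.conf
[general]
pipelineSetSelectionPolicy = weighted_random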
Can you open the search (from the dashboard table) in a separate tab and share the search being used?
Here is a runanywhere example showing it working:

| makeresults format=json data="[{ \"key 1\": { \"field1\": \"x\" }, \"a.b.c:d-1.0.0\": { \"field2\": \"xx\" }, \"key3\": { \"field3\": \"xxx\" } }]"
| rename _raw as field
| table field
| eval Name_A=json_array_to_mv(json_keys(field))
| mvexpand Name_A
| eval Name_B=json_array_to_mv(json_keys(json_extract_exact(field,Name_A)))

What else can you tell us about the key names?
Thanks for pointing me in the right direction. I slightly modified the search and it now works:

index=wineventlog EventCode=5145 file_name="\\\\*\\IPC$" RelativeTargetName IN (samr,lsarpc,srvsvc,winreg) src_user!=*$
| stats count by src_user,src_ip,RelativeTargetName,host_fqdn
| stats list(RelativeTargetName) as all by src_ip, src_user,host_fqdn
| where mvcount(all) = 4
As an additional hint, you could add all four search terms literally to limit the initial search results for a bit of a performance boost.

index=wineventlog EventCode=5145 file_name="\\\\*\\IPC$" RelativeTargetName IN (samr,lsarpc,srvsvc,winreg) src_user!=*$ samr lsarpc srvsvc winreg
| stats count by src_user,src_ip,RelativeTargetName,host_fqdn
| stats list(RelativeTargetName) by src_ip, src_user,host_fqdn

But whether this is significantly beneficial, you'd have to check on the job inspector page. Another way to limit your results (as opposed to @ITWhisperer 's solution, which works on the summarized data) would be to add all four values explicitly as field values, not with the IN clause.

index=wineventlog EventCode=5145 file_name="\\\\*\\IPC$" RelativeTargetName=samr RelativeTargetName=lsarpc RelativeTargetName=srvsvc RelativeTargetName=winreg src_user!=*$
If I remove the asterisk, the Exclude Command input ignores any input; even a single input is ignored, so it only shows the table from the find command.