All Posts

@ryanaa- I think this question is better suited for the Machine Learning community.
@joe2- I would like to clarify a few points, and I think you will get the idea of how you can do something like that: Your query-1 is not working because it seems you are using the old query; the macro from the old query apparently does not exist anymore. The new query is based on firewall data. Here - https://research.splunk.com/network/1ff9eb9a-7d72-4993-a55e-59a839e607f1/ But because this is dependent on firewall traffic data, it only works when you have a firewall between those two machines. It could be a traditional firewall, an AWS firewall, or anything else. For your query-2, again you are looking for source=firewall* data, and Windows data does not contain that sourcetype; that's why you are seeing no results.

Summary: If you have a traffic monitoring device in between those two machines, use the traffic logs to detect it. https://research.splunk.com/network/1ff9eb9a-7d72-4993-a55e-59a839e607f1/ https://research.splunk.com/network/3141a041-4f57-4277-9faa-9305ca1f8e5b/ If you don't, then the only option is to have something on the Windows victim device that logs all traffic to the machine, and use those traffic logs to build the query.

I hope this helps!!! Kindly upvote if it does!!!
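To make the last point concrete, a minimal sketch of the traffic-log approach, assuming CIM-style field names and placeholder IPs; swap in your own source and the attacker/victim addresses:

source=firewall* src_ip=10.0.0.5 dest_ip=10.0.0.9
| stats count BY src_ip dest_ip dest_port action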
@darling- It's a publicly released app now, so I don't think there should be any restriction on which Cloud stack you can install it on. But for better clarity, maybe you can contact Splunk support for Cloud and they should resolve your confusion.

I hope this helps!!!
Your solution works perfectly. I still need to do some wider testing to make sure there are no gaps, but it looks like exactly what I need... the only issue is... I'm not sure *exactly* how it works. I know what fillnull and eval do, but the way you've used mvfilter confuses me. If you have the time, could you explain in simple terms how your solution works, please?
Thank you for your help.

For example, if I have an MV field with the values "red", "blue", "N/A", "N/A", I would want to filter out the "N/A" values. However, if instead I have an MV field with the single value "red", then I would want it left alone. And third, if I have an MV field with the values "N/A", "N/A", and "N/A", then I would want it left alone. Only when there's an MV field with both an "N/A" value and a non-N/A value do I want the "N/A" values removed.
Hi, could you check these requirements? For example, does your CPU support the AVX/AVX2 instructions, and if yes, are they enabled? https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/MigrateKVstore https://www.mongodb.com/docs/manual/administration/production-notes/ I hope this helps.
Ok. If your initial stats output doesn't include the _time field, there's nothing to bin. That's why you're getting no results.
What do you mean by "check"? Do you filter your initial results so that you have only those where field F contains at least two values of which one is 'N/A' and one isn't? Or do you want to do a conditional evaluation? (All other values which do not contain 'N/A' are left as they were).
| eval filtered=mvfilter(mvfield!="N/A") | fillnull value="N/A" filtered
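In case it helps to see the mechanics: mvfilter(mvfield!="N/A") keeps only the values that are not "N/A" and returns NULL when nothing survives (the all-"N/A" case), after which fillnull restores "N/A" for those NULL results. A minimal sketch you can paste into a search bar to watch it work, using a throwaway field built with makeresults (the field name mvfield is just an example):

| makeresults
| eval mvfield=split("red,blue,N/A,N/A", ",")
| eval filtered=mvfilter(mvfield!="N/A")
| fillnull value="N/A" filtered
| table mvfield filtered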
@AJH2000 Try fetching logs manually using Postman or curl. If you get a 200 OK response with logs, the API is working. If there are authentication errors, check your token and API permissions.
Hi @pedropiin , your stats and bin statements are wrong, please try this:

<your_search>
| bin span=1d _time
| eval var1=...
| eval var2=...
| sort var2
| eval var3=...
| stats count(var1) AS var1 count(var2) AS var2 count(var3) AS var3 BY _time

About the sensitive information, you can mask it; only the event structure and the field extractions are interesting to me.

Ciao. Giuseppe
@dtaylor Check this: it tests whether a multivalue field contains both "N/A" and at least one non-"N/A" value. If both conditions are met, it removes "N/A" and returns the remaining values; otherwise, it keeps the field unchanged.
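A minimal sketch of that conditional, assuming the multivalue field is named mvfield (adjust to your field name):

| eval filtered=if(mvcount(mvfilter(mvfield="N/A"))>0 AND mvcount(mvfilter(mvfield!="N/A"))>0, mvfilter(mvfield!="N/A"), mvfield)

Because mvfilter() returns NULL when no values match, mvcount() of an all-"N/A" or all-non-"N/A" result is NULL rather than a number, so the condition fails and those fields are left unchanged.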
I've been smashing my head against this issue for the past few hours. I need to check a multivalue field to see if it contains "N/A" *and* any value that isn't "N/A". If this is true, I need to filter out whatever "N/A" values exist within the field and return the remaining non-N/A values as a multivalue field.
Thank you for your reply. I have found the cause of the issue: there was a mistake in executing the query in the app, which does not have permission to access the database on Splunk. Thanks again!
I'm also assuming that you've already set maxKBps = 0 in limits.conf: # $SPLUNK_HOME/etc/system/local/limits.conf [thruput] maxKBps = 0  
Hi @MichaelM1,

Increasing parallelIngestionPipelines to a value larger than 1 is similar to running multiple instances of splunkd with splunktcp inputs on different ports. As a starting point, however, I would leave parallelIngestionPipelines unset or at the default value of 1.

splunkd uses a series of queues in a pipeline to process events. Of note:

parsingQueue
aggQueue
typingQueue
rulesetQueue
indexQueue

There are other queues, but these are the most well-documented. See https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774/highlight/true#M103484. I have copies of the printer and high-DPI display friendly PDFs if you need them.

On a typical universal forwarder acting as an intermediate forwarder, parsingQueue, which performs minimal event parsing, and indexQueue, which sends events to outputs, are the likely bottlenecks. Your metrics.log event provides a hint:

<date time> Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=1217, largest_size=1217, smallest_size=0

Note that metrics.log logs queue names in lower case, but queue names are case-sensitive in configuration files.

parsingQueue is blocked because the queue has reached its limit: current_size_kb=511 against max_size_kb=512. The inputs.conf splunktcp stopAcceptorAfterQBlock setting controls what happens to the listener port when a queue is blocked, but you don't need to modify this setting.

In your case, I would start by leaving parallelIngestionPipelines at the default value of 1 as noted above and increasing indexQueue to the next multiple of 128KB larger than twice the largest_size value observed for parsingQueue. In %SPLUNK_HOME%\etc\system\local\server.conf on the intermediate forwarder:

[queue=indexQueue]
# 2 * 1217KB <= 20 * 128KB = 2560KB
maxSize = 2560KB

(x86-64, ARM64, and SPARC architectures have 64 byte cache lines, but on the off chance you encounter AIX on PowerPC with 128 byte cache lines, for example, you'll avoid buffer alignment performance penalties, closed-source splunkd memory allocation overhead notwithstanding.)

Observe metrics.log following the change and keep increasing maxSize until you no longer see instances of blocked=true. If you run out of memory, add more memory to your intermediate forwarder host or consider scaling your intermediate forwarders horizontally with additional hosts.

As an alternative, you can start by increasing maxSize for parsingQueue and only increase maxSize for indexQueue if you see blocked=true messages in metrics.log:

[queue=parsingQueue]
maxSize = 2560KB

You can usually find the optimal values through trial and error without resorting to a queue-theoretic analysis. If you find that your system becomes CPU-bound at some maxSize limit, you can increase parallelIngestionPipelines, for example, to N-2, where N is the number of cores available. Following that change, modify maxSize from default values by observing metrics.log. Note that each pipeline consumes as much memory as a single-pipeline splunkd process with the same memory settings.
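For the observation step, a minimal sketch of a search over the forwarder's internal logs, assuming the intermediate forwarder's _internal data is reaching your indexers:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count max(current_size_kb) AS max_current_size_kb BY host name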
As others have already said, that license is generated when you install Splunk for the first time. After that, you officially have the following options: convert the license to Free; update it to a dev/test license (if your company has a paid Splunk license); install a developer license; try to get an extension to the trial from Splunk or a partner if you are evaluating it and almost ready to pay for the official version; pay for an official license; or, as a last option, uninstall Splunk, remove its files from disk, and then reinstall it.
If you have exactly the same error saying that the volume is not defined, then check the definition of that volume. There are probably some issues with it.
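For reference, a minimal sketch of what a volume definition looks like in indexes.conf; the stanza name, path, and size here are examples, not your values:

# $SPLUNK_HOME/etc/system/local/indexes.conf
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

Any index that references it would then use a path like homePath = volume:primary/<index>/db, and the volume name in the reference must match the stanza name exactly.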
If you also want to see those HFs, you must add them as search peers to your SH. Usually you shouldn't do this on any SH other than the MC. In the MC, just add them as indexers and put them in a separate group of their own, like HFs. Then you can query them by using that group with rest.
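For example, a minimal sketch, assuming a custom MC group named HFs; the MC typically exposes custom groups as distributed search groups with a dmc_customgroup_ prefix, so verify the exact name on your instance:

| rest /services/server/info splunk_server_group=dmc_customgroup_HFs
| table splunk_server serverName version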
This is Splunk Enterprise on-premises version 9.2.4, and Config Explorer is version 1.7.16. Splunkbase reports that Config Explorer 1.7.16 is compatible with all the Splunk 9 versions, 9.0-9.4 as of this writing.

The Upgrade Readiness App detected 1 app with deprecated Python on the my-server instance: config_explorer. I have confirmed that the python3.7m executable in $SPLUNK/bin reports it is version 3.7.17, and I have viewed the jquery.js file in $SPLUNK/etc/apps/splunk_monitoring_console/src/visualizations/heatmap/node_modules/jquery/dist/, which identifies itself as jQuery JavaScript Library v3.5.0 and is the only jquery.js file in the $SPLUNK subdirectories.

Why is the Upgrade Readiness App reporting deprecated Python, and why does my Splunk get warnings about Python and jQuery incompatibilities?