All Posts


@ProPoPop  You can increase the speed (throughput) of your Universal Forwarder, but this will also use more network bandwidth. To do this, change the maxKBps setting in the limits.conf file on the Universal Forwarder. More details are here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf  I also recommend following this: https://community.splunk.com/t5/Splunk-Search/How-do-you-know-when-to-increase-the-bandwidth-limits/td-p/156419  It explains how to safely increase bandwidth and check its usage. You can monitor bandwidth usage in the Universal Forwarder logs on the machine at $SPLUNK_HOME/var/log/splunk/metrics.log, or use this Splunk search to see the data: index=_internal source="*metrics.log" group=thruput
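For example, a sketch of a search that charts the forwarder's actual throughput over time (substitute your forwarder's hostname; the field names come from the standard thruput lines in metrics.log):

index=_internal source=*metrics.log* host=<your_forwarder> group=thruput name=thruput
| timechart span=5m avg(instantaneous_kbps) AS avg_KBps max(instantaneous_kbps) AS peak_KBps

If peak_KBps sits flat at your maxKBps value, the forwarder is being throttled by the limit.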
As per the Monitoring Console, I could see that the indexing queue and the splunktcpin queue are high.
Try increasing the maxKBps setting in the forwarder's limits.conf file.  It sounds like the default of 256 is not enough.  Try 512 or 0 (unlimited).
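As a sketch, the change goes in a [thruput] stanza on the forwarder (for example in $SPLUNK_HOME/etc/system/local/limits.conf or an app you deploy), followed by a forwarder restart:

[thruput]
# 0 = unlimited; 512 doubles the 256 KBps default
maxKBps = 512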
Have you enabled receiving of data in Splunk?  Go to Settings->"Forwarding and Receiving"  to turn on receiving. Does "localhost url" include the port number (9997 by default)? Do your firewalls allow connections between Mulesoft and Splunk?
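For reference, enabling receiving through that menu is equivalent to having something like this on the receiving Splunk instance (a sketch, assuming the default port):

# inputs.conf on the receiver
[splunktcp://9997]
disabled = 0

Since the config in the question uses a URL and token, it may also be worth confirming that the HTTP Event Collector input and the token itself are enabled under Settings->Data Inputs->HTTP Event Collector, as the SplunkHttp appender sends over HEC rather than the forwarding port.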
thank you for answer. I will try it
So if you have a stats that produces the number, e.g. .... | stats sum(amount) as total by source | eval total=total/7*0.5. But without knowing what you've tried so far, this may or may not be useful. It's often good to show what you've done with your SPL along with an explanation of your problem, as it helps us understand where you're trying to get to.
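As a rough sketch of the kind of thing you could build on top of that (assuming amount is the field being summed and the search spans the week in question; the yellow cut-off is just a placeholder):

... | stats sum(amount) as total by source
| eventstats sum(total) as weekly_total
| eval threshold=weekly_total/7*0.5
| eval range=case(total < threshold, "red", total < threshold*2, "yellow", true(), "green")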
Hi, I have created a new token and index in Splunk for my Mulesoft project. These are the configurations I have done in Mulesoft to get the Splunk logs. Despite this, I am unable to see any logs in the dashboard when I search like index="indexname".

LOG4J2.XML FILE CHANGES

<Configuration status="INFO" name="cloudhub" packages="com.mulesoft.ch.logging.appender,com.splunk.logging,org.apache.logging.log4j">
  <Appenders>
    <RollingFile "Rolling file details here" </RollingFile>
    <SplunkHttp name="Splunk" url="localhost url" token="token" index="indexname" batch_size_count="10" disableCertificateValidation="true">
      <PatternLayout pattern="%-5p %d [%t] [processor: %X{processorPath}; event: %X{correlationId}] %c: %m%n" />
    </SplunkHttp>
    <Log4J2CloudhubLogAppender name="CloudHub"
      addressProvider="com.mulesoft.ch.logging.DefaultAggregatorAddressProvider"
      applicationContext="com.mulesoft.ch.logging.DefaultApplicationContext"
      appendRetryIntervalMs="${sys:logging.appendRetryInterval}"
      appendMaxAttempts="${sys:logging.appendMaxAttempts}"
      batchSendIntervalMs="${sys:logging.batchSendInterval}"
      batchMaxRecords="${sys:logging.batchMaxRecords}"
      memBufferMaxSize="${sys:logging.memBufferMaxSize}"
      journalMaxWriteBatchSize="${sys:logging.journalMaxBatchSize}"
      journalMaxFileSize="${sys:logging.journalMaxFileSize}"
      clientMaxPacketSize="${sys:logging.clientMaxPacketSize}"
      clientConnectTimeoutMs="${sys:logging.clientConnectTimeout}"
      clientSocketTimeoutMs="${sys:logging.clientSocketTimeout}"
      serverAddressPollIntervalMs="${sys:logging.serverAddressPollInterval}"
      serverHeartbeatSendIntervalMs="${sys:logging.serverHeartbeatSendIntervalMs}"
      statisticsPrintIntervalMs="${sys:logging.statisticsPrintIntervalMs}">
    </Log4J2CloudhubLogAppender>
  </Appenders>

  <Loggers>
    <AsyncLogger name="org.mule.service.http" level="WARN" />
    <AsyncLogger name="org.mule.extension.http" level="WARN" />
    <AsyncLogger name="org.mule.runtime.core.internal.processor.LoggerMessageProcessor" level="INFO" />
    <AsyncRoot level="INFO">
      <AppenderRef ref="file" />
      <AppenderRef ref="Splunk" />
      <AppenderRef ref="CloudHub" />
    </AsyncRoot>
    <AsyncLogger name="Splunk.Logger" level="INFO">
      <AppenderRef ref="splunk" />
    </AsyncLogger>
  </Loggers>
</Configuration>

POM.XML FILE CHANGES

<repository>
  <id>splunk-artifactory</id>
  <name>Splunk Releases</name>
  <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
</repository>

<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.7.3</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.10.0</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-api</artifactId>
  <version>2.10.0</version>
</dependency>

Please let me know if I am missing out on any configuration, since I believe I am pretty much following what's in the Mule website and other articles.
Hello team! We have a problem with sending data from several Domain Controllers to our Splunk instance. We are collecting and sending logs using the Splunk Universal Forwarder. Most of the time there are no problems with sending, but sometimes we "lose" logs. The DCs generate a very large volume of data, and when the event logs reach their maximum size they start overwriting the oldest entries. As far as I can tell, the problem only affects the Windows Security event logs. Problem: the Splunk Universal Forwarder doesn't have time to collect and send the data from the DC before the logs get overwritten. Could this be related to the Splunk process execution priority or to load from other processes on the DC? How can we solve this problem? Have you had the same experience, or do you have any advice for getting rid of this problem?
Hello, I am new to Splunk and I am hoping someone will be able to help me out with a problem. I am creating a heatmap and the values aren't dynamic enough. I want the values of the heat graph to be calculated based on the total amount (whatever I am counting) / 7 (number of days) * .5. For example, if the total amount for a certain week is 700, the result of the above equation is 50. If the source in question is below 50 then it would be red, above that would be yellow, and whatever I set for green. How can I calculate that when I have multiple sources being used? If this doesn't make sense I can provide some data. Thank you in advance for your time and help.
There is yet another thing which can sometimes hint at whether you're getting data onto one or the other endpoint. With the /event endpoint you can push indexed fields. So if you have some non-raw-based fields which obviously weren't extracted/calculated in the ingestion pipeline (but for this you'd have to dig through your index-time configs), that would strongly suggest you're getting data via the /event endpoint.
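One quick way to test this (a sketch; swap in your own index and the field you suspect): indexed fields can be queried directly with tstats, while search-time extractions cannot.

| tstats count where index=your_index by your_suspect_field

If that returns values for a field with no search-time extraction behind it, the field was almost certainly sent index-time, which points at the /event endpoint with a fields block.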
Hi @CMAzurdia  My apologies, I thought you were referring to people logging in to Splunk using SAML. Ultimately this depends on what the data looks like when you receive it in Splunk. It's worth starting by identifying key fields in your data and then filtering down until you just get the events you are interested in - from here you can work through your use case/requirements.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
This is true @sverdhan  - As @PickleRick has said, this might not cover everything you're expecting. I spent a big chunk of time once trying to find "every" combination for a project I was working on, to automatically notify when people were doing this, but they often found clever ways around it, such as using inputlookup, makeresults etc. in subsearches. However, it might catch "most" of your queries - ultimately your mileage may vary!  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hello @VatsalJagani , Yes, we checked via curl:

curl -k -X POST 'https://hec-splunk.xxxxx.net/services/collector/event' --header 'Authorization: Splunk xxxx-xxxx-xxxx-xxx-xxxx' -d '{"sourcetype": "my_sample_data", "event": "2025-04-23-Test"}'

Result: {"text":"Success","code":0}

And we can see the event in Splunk.
Check out 'charting.fieldColors' in the Dashboards and Visualization manual.
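In Simple XML that option takes a map of field names to colors; a sketch using the field names from your chart (the hex values are placeholders for whatever palette you prefer):

<option name="charting.fieldColors">
  {"nonconformance": 0xDC4E41, "part_not_installed": 0xF8BE34, "incomplete_task": 0x53A051, "shortage": 0x006D9C}
</option>

Because the colors are keyed by field name rather than by position, they stay the same regardless of which fields your tokens currently include.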
Ok. Two things.
1. Your props.conf stanza is wrong. monitor: is a type of input and you're not supposed to use it outside inputs.conf. Props should contain stanzas for sourcetype, source or host.
2. mysqld_error is one of the built-in sourcetypes. Unfortunately, it doesn't contain any timestamp recognition settings, so Splunk tries to guess. First, I'd check whether all hosts report the timestamps in their events in a consistent manner.
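If the timestamps are consistent, explicit timestamp settings usually stop the guessing; a sketch assuming the newer ISO-style MySQL error log format that starts each line with a UTC timestamp (adjust TIME_FORMAT to whatever your hosts actually write):

# props.conf on the parsing tier (heavy forwarders or indexers)
[mysqld_error]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC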
Hi @gn694  If you are on-prem then you can set the HttpInputDataHandler component to DEBUG mode (but don't do it for long!) - this will record the contents of HEC payloads in _internal, which might help you work out if it's the raw or event endpoint. Edit the log level via Settings->Server Settings->Server Logging - search for "HttpInputDataHandler" and change it to DEBUG. Shortly afterwards you will get logs like this, which you can find with the following search: index=_internal sourcetype=splunkd log_level=debug component=HttpInputDataHandler  In my example, the top one was using the event endpoint and the bottom one the raw endpoint. The logs sent to the event endpoint will always have an "event" field in the body_chunk value, along with other fields like time/host/source etc. Try this and let me know how you get on!  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
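Building on that, a hedged sketch to summarize which endpoint the captured payloads look like (it assumes the body_chunk field is populated as described above):

index=_internal sourcetype=splunkd log_level=DEBUG component=HttpInputDataHandler
| eval endpoint=if(match(body_chunk, "\"event\""), "event", "raw")
| stats count by endpoint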
Hello,  Is there a specific way to set the color to specific colors based on the specific field? I have a stacked column chart with four possible fields used (nonconformance, part_not_installed, incomplete_task, shortage in this case), and I use a token system to only show specific fields in my chart based on those selections. It seems that the color order is set by default, so when I remove the first option the color changes to the new top field name. Is there an easy way to lock those to consistently be the same colors for each potential option - that is, locking field color to field value? (Example images showing the color assignments before vs. after the fields change.)
Wait a second. Earlier you wrote (which I had missed) that your other extracted fields do work. Since you're using search-time extractions, that means that you have props defined on your search head tier. Do you also have those props on the correct component in the ingestion path? What is your architecture, and where do you have which settings?
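A quick way to see which configuration actually wins on each tier is btool; a sketch to run on each component, substituting your sourcetype:

$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug

The --debug flag shows which file each effective setting comes from.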
We are a big customer. We hit a big issue in upgrading from 9.1.7 to 9.2.4 and it took a long time for the issue to be resolved. We have a large stack with many indexers. Our current operating system is RedHat 7; we are in the process of migrating to RedHat 8. On upgrade from 9.1.7 to 9.2.4, one of the indexer clusters that ingests the most data suddenly had its aggregation and parsing queues filled at 100% during our peak logging hours. The indexers were not using much more CPU or memory; it's just that the queues were very full. It turns out that Splunk has enabled profiling starting in 9.2, specifically CPU time profiling. These settings are controlled in limits.conf: https://docs.splunk.com/Documentation/Splunk/9.2.4/Admin/Limitsconf. There are 6 new profiling metrics and these are all enabled by default. In addition, agg_cpu_profiling makes a lot of time-of-day (clock) calls. A lot. There are several choices of clocksource in RedHat: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/reference_guide/chap-Timestamping#sect-Hardware_clocks It turns out that we had set our clock source to "hpet" some number of years ago. This clocksource, while high precision, is much slower to read than "tsc". Once we switched to tsc, the problem with our aggregation and parsing queues at 100% during peak hours was fixed. Even if you don't have the clock source issue, the change in profiling is something to be aware of in the upgrade to 9.2.
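To check (and temporarily change) the clock source on a RHEL-style system, the standard sysfs paths are:

# Show the clock source currently in use
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# List the clock sources the kernel has available
cat /sys/devices/system/clocksource/clocksource0/available_clocksource

# Switch to tsc until the next reboot (persist it with the clocksource= kernel boot parameter)
echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource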
I'm having the same problem and have had to go back to the <p></p> method that worked previously, in the interim.