All Posts
Hi @RSS_STT  It isn't possible to use typical data forwarding from Splunk Cloud to another system. The only Splunkbase apps I have seen for things like sending to HEC or external systems generally aren't supported for Splunk Cloud, so the only other option would be to run something that uses the Search API to search the data and send it to the appropriate place. Ultimately this is a very bad idea and not something that is supported or encouraged. What is your ultimate goal? Is there a reason you aren't able to forward the data from the source to multiple destinations, or to use federated search to query the data in Splunk Cloud from your other instance? The only other route I can think of is using Ingest Actions to send the data to S3 and then using the AWS TA to ingest it into your other Splunk instance. Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
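If you do go down the Search API route despite the caveats above, a minimal sketch of exporting results looks like the following (the stack hostname, credentials, and search string are placeholders, and Splunk Cloud requires your client IP to be allowlisted for REST API access on port 8089):

curl -u admin:changeme \
  "https://yourstack.splunkcloud.com:8089/services/search/jobs/export" \
  -d search="search index=main earliest=-15m" \
  -d output_mode=json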
I have two separate Splunk Cloud instances and want to forward a specific set of data from one instance to another. Please suggest an approach or any app/add-on available for this purpose.
Hi @shoaibalimir  Yes, you can achieve this by using a bar chart; you need to separate the different statuses into series. Please find a Dashboard Studio example below:

{
  "title": "StatusDash",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": {
        "defaultValue": "-24h@h,now",
        "token": "global_time"
      },
      "title": "Global Time Range",
      "type": "input.timerange"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    }
  },
  "visualizations": {
    "viz_11HT7uMG": {
      "dataSources": {
        "primary": "ds_wtNWwpS3"
      },
      "options": {
        "dataValuesDisplay": "off",
        "legendDisplay": "off",
        "legendTruncation": "ellipsisMiddle",
        "seriesColorsByField": {
          "bigissue": "#ff0000",
          "online": "#008000",
          "smallissue": "#FFA500"
        },
        "showIndependentYRanges": false,
        "showOverlayY2Axis": false,
        "showRoundedY2AxisLabels": false,
        "showSplitSeries": false,
        "showY2MajorGridLines": true,
        "stackMode": "stacked",
        "xAxisLabelRotation": 0,
        "xAxisTitleVisibility": "show",
        "y2AxisAbbreviation": "auto",
        "y2AxisTitleVisibility": "show",
        "yAxisAbbreviation": "auto",
        "yAxisTitleVisibility": "show"
      },
      "type": "splunk.column"
    }
  },
  "dataSources": {
    "ds_wtNWwpS3": {
      "name": "Column chart search",
      "options": {
        "query": "| makeresults count=60\n| streamstats count AS minute\n| eval _time = relative_time(now(), \"-1h\") + (minute-1)*60\n| eval success_count=case(\n minute IN (5, 21, 35, 48), 53, \n minute IN (14, 27, 42), 58, \n 1=1, 60\n)\n| eval success_pct = round(success_count/60*100, 1)\n| eval status = case(\n success_pct >= 99, \"online\",\n success_pct >= 95, \"smallissue\",\n success_pct < 95, \"bigissue\"\n)\n| table _time, minute, success_count, success_pct, status\n\n| timechart span=1m latest(success_pct) by status",
        "queryParameters": {
          "earliest": "-60m@m",
          "latest": "now"
        }
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [
      "input_global_trp"
    ],
    "layoutDefinitions": {
      "layout_1": {
        "options": {
          "display": "auto",
          "height": 960,
          "width": 1440
        },
        "structure": [
          {
            "item": "viz_11HT7uMG",
            "position": {
              "h": 250,
              "w": 1440,
              "x": 0,
              "y": 10
            },
            "type": "block"
          }
        ],
        "type": "absolute"
      }
    },
    "options": {},
    "tabs": {
      "items": [
        {
          "label": "New tab",
          "layoutId": "layout_1"
        }
      ]
    }
  }
}

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @Cheng2Ready , OK, it's a different solution and that's fine. About your search: you have to decide whether you want to use the lookup command (as in your original solution) or a subsearch using NOT [...], as in my solution, but not the last solution that you shared. I prefer my solution because it's a best practice to move all the possible search conditions into the main search. Ciao. Giuseppe
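For illustration, a rough sketch of the two patterns being compared (the lookup file and field names here are hypothetical): the subsearch form pushes the filter into the main search, which is the best practice mentioned above, while the lookup form filters after the events have been retrieved.

index=main sourcetype=mydata NOT [| inputlookup known_hosts.csv | fields host ]

index=main sourcetype=mydata | lookup known_hosts.csv host OUTPUT host AS matched | where isnull(matched)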
Hi @shoaibalimir , there's a visualization very similar to yours: the Swimline App ( https://splunkbase.splunk.com/app/3708 ). Otherwise, you could create a dashboard containing many panels, one for each region, but I'm not sure that it can be dynamic. Ciao. Giuseppe
Then I think it's definitely something related to the Log4j configuration or to the Spark/Java side, in which I have zero experience, so I'm sorry I won't be able to help you, but I hope someone else in the community will be able to.
Hi all, I'm exploring ways to get a specific visualization on a Splunk dashboard; I have attached a screenshot as reference below. In this, if I hover over the yellow or orange bar, it should give the SLA value, and if I click on the arrow, it should drill down to the categories and further down to the services. Currently I have a setup in which the SLA value is represented by a gauge widget visualization. Please assist me if there's any method to get this kind of visualization. Many thanks in advance!
@ProPoPop  You can increase the speed (throughput) of your Universal Forwarder, but this will also use more network bandwidth. To do this, change the maxKBps setting in the limits.conf file on the Universal Forwarder. More details are here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf  I also recommend following this: https://community.splunk.com/t5/Splunk-Search/How-do-you-know-when-to-increase-the-bandwidth-limits/td-p/156419  It explains how to safely increase bandwidth and check its usage. You can monitor bandwidth usage by checking the Universal Forwarder logs on the machine at $SPLUNK_HOME/var/log/splunk/metrics.log, or use this Splunk search to see the data: index=_internal source="*metrics.log" group=thruput
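For example, a minimal limits.conf stanza on the forwarder looks like this (512 is just an illustration; 0 removes the limit entirely):

[thruput]
maxKBps = 512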
As per the monitoring console, I can see that the indexing queue and the splunktcpin queue are high.
Try increasing the maxKBps setting in the forwarder's limits.conf file.  It sounds like the default of 256 is not enough.  Try 512 or 0 (unlimited).
Have you enabled receiving of data in Splunk?  Go to Settings->"Forwarding and Receiving"  to turn on receiving. Does "localhost url" include the port number (9997 by default)? Do your firewalls al... See more...
Have you enabled receiving of data in Splunk?  Go to Settings->"Forwarding and Receiving"  to turn on receiving. Does "localhost url" include the port number (9997 by default)? Do your firewalls allow connections between Mulesoft and Splunk?
Thank you for the answer. I will try it.
So if you have a stats command that produces the number, e.g. .... | stats sum(amount) as total by source | eval total=total/7*.5 but without knowing what you've tried so far, this may or may not be useful. It's often good to show what you've done with your SPL along with an explanation of your problem, as it helps us understand where you're trying to get to.
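To sketch the dynamic-threshold idea a little further (the amount and source field names, and the 1.2 multiplier for the yellow band, are assumptions for illustration):

... | bin _time span=1d
| stats sum(amount) as daily by _time, source
| eventstats sum(daily) as weekly_total by source
| eval threshold = weekly_total / 7 * 0.5
| eval range = case(daily < threshold, "red", daily < threshold * 1.2, "yellow", 1=1, "green")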
Hi, I have created a new token and index in Splunk for my MuleSoft project. These are the configurations I have done in MuleSoft to get the Splunk logs. Despite this, I am unable to see any logs in the dashboard when I search like index="indexname".

LOG4J2.XML FILE CHANGES

<Configuration status="INFO" name="cloudhub"
    packages="com.mulesoft.ch.logging.appender,com.splunk.logging,org.apache.logging.log4j">
  <Appenders>
    <RollingFile "Rolling file details here" </RollingFile>
    <SplunkHttp name="Splunk"
        url="localhost url"
        token="token"
        index="indexname"
        batch_size_count="10"
        disableCertificateValidation="true">
      <PatternLayout pattern="%-5p %d [%t] [processor: %X{processorPath}; event: %X{correlationId}] %c: %m%n" />
    </SplunkHttp>
    <Log4J2CloudhubLogAppender name="CloudHub"
        addressProvider="com.mulesoft.ch.logging.DefaultAggregatorAddressProvider"
        applicationContext="com.mulesoft.ch.logging.DefaultApplicationContext"
        appendRetryIntervalMs="${sys:logging.appendRetryInterval}"
        appendMaxAttempts="${sys:logging.appendMaxAttempts}"
        batchSendIntervalMs="${sys:logging.batchSendInterval}"
        batchMaxRecords="${sys:logging.batchMaxRecords}"
        memBufferMaxSize="${sys:logging.memBufferMaxSize}"
        journalMaxWriteBatchSize="${sys:logging.journalMaxBatchSize}"
        journalMaxFileSize="${sys:logging.journalMaxFileSize}"
        clientMaxPacketSize="${sys:logging.clientMaxPacketSize}"
        clientConnectTimeoutMs="${sys:logging.clientConnectTimeout}"
        clientSocketTimeoutMs="${sys:logging.clientSocketTimeout}"
        serverAddressPollIntervalMs="${sys:logging.serverAddressPollInterval}"
        serverHeartbeatSendIntervalMs="${sys:logging.serverHeartbeatSendIntervalMs}"
        statisticsPrintIntervalMs="${sys:logging.statisticsPrintIntervalMs}">
    </Log4J2CloudhubLogAppender>
  </Appenders>

  <Loggers>
    <AsyncLogger name="org.mule.service.http" level="WARN" />
    <AsyncLogger name="org.mule.extension.http" level="WARN" />
    <AsyncLogger name="org.mule.runtime.core.internal.processor.LoggerMessageProcessor" level="INFO" />
    <AsyncRoot level="INFO">
      <AppenderRef ref="file" />
      <AppenderRef ref="Splunk" />
      <AppenderRef ref="CloudHub" />
    </AsyncRoot>
    <AsyncLogger name="Splunk.Logger" level="INFO">
      <AppenderRef ref="splunk" />
    </AsyncLogger>
  </Loggers>
</Configuration>

POM.XML FILE CHANGES

<repository>
  <id>splunk-artifactory</id>
  <name>Splunk Releases</name>
  <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
</repository>

<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.7.3</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.10.0</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-api</artifactId>
  <version>2.10.0</version>
</dependency>

Please let me know if I am missing out on any configuration, since I believe I am pretty much following what's on the MuleSoft website and other articles.
Hello team! We have a problem with sending data from several Domain Controllers to our Splunk instance. We are collecting and sending logs using the Splunk Universal Forwarder. Often there are no problems with sending, but sometimes we are "losing" logs. The DCs have a very big load of data, and when the amount of logs reaches the limit, the oldest logs start being overwritten. As I noticed, the problem affects only the Windows Security journals. Problem: the Splunk Universal Forwarder doesn't have time to get and send data from a DC before the logs get overwritten. Could this be related to the Splunk process execution priority or to load from other processes on the DC? How can we solve this problem? Do you have the same experience or any advice to get rid of this problem?
Hello, I am new to Splunk and I am hoping someone will be able to help me out with a problem. I am creating a heatmap and the values aren't dynamic enough. I want the values of the heat graph to be calculated as the total amount (whatever I am counting) / 7 (number of days) * .5. For example, if the total amount for a certain week is 700, the result of the above equation is 50. If the source in question is below 50 it would be red, above that would be yellow, and above whatever threshold I set, green. How can I calculate that when I have multiple sources being used? If this doesn't make sense I can provide some data. Thank you in advance for your time and help.
There is yet another thing which can sometimes hint at whether you're getting data onto one or the other endpoint. With the /event endpoint you can push indexed fields. So if you have some non-raw-based fields which obviously weren't extracted/calculated in the ingestion pipeline (but for this you'd have to dig through your index-time configs), that would strongly suggest you're getting data via the /event endpoint.
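For reference, a quick sketch of pushing an indexed field through the /event endpoint (the hostname and token are placeholders; the fields object becomes index-time fields):

curl -k "https://hec.example.net:8088/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "test", "sourcetype": "my_sample_data", "fields": {"env": "prod"}}'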
Hi @CMAzurdia  My apologies, I thought you were referring to people logging in to Splunk using SAML. Ultimately this depends on what the data looks like when you receive it in Splunk. It's worth starting by identifying key fields in your data and then filtering down until you just get the events you are interested in; from there you can work through your use case/requirements. Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
This is true @sverdhan  - As @PickleRick has said, this might not cover everything you're expecting.  I spent a big chunk of time once trying to find "every" combination for a project I was working ... See more...
This is true @sverdhan  - As @PickleRick has said, this might not cover everything you're expecting.  I spent a big chunk of time once trying to find "every" combination for a project I was working on to automatically notify of people doing this, however they often found clever ways around, things like using inputlookup, makeresults etc in subsearches. However, it might catch "most" of your queries - ultimately your mileage may vary!  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
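For anyone attempting something similar, a rough starting point is the audit index (the command list here is illustrative, not exhaustive):

index=_audit action=search info=granted search=*
| regex search="\|\s*(inputlookup|makeresults)"
| table _time, user, search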
Hello @VatsalJagani , Yes, we checked via Curl curl -k -X POST 'https://hec-splunk.xxxxx.net/services/collector/event' --header 'Authorization: Splunk xxxx-xxxx-xxxx-xxx-xxxx' -d '{"sourcetype": "... See more...
Hello @VatsalJagani , Yes, we checked via curl:

curl -k -X POST 'https://hec-splunk.xxxxx.net/services/collector/event' --header 'Authorization: Splunk xxxx-xxxx-xxxx-xxx-xxxx' -d '{"sourcetype": "my_sample_data", "event": "2025-04-23-Test"}'

Result: {"text":"Success","code":0}

And we can see the event in Splunk.