All Topics


Hello all, I have a dashboard that uses a dynamic panel to load different tables depending on which link is clicked. However, the links are all garbled: the first 4 display, but the others are underneath the dynamic panel. If you press the right-arrow key after clicking one of the links, you can get to each link, but the display is all messed up. How do I get all of the links onto one single line? Thanks!
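
For reference, one common Simple XML pattern for a single row of switcher links is a link-list input in the fieldset that sets a token the dynamic panel keys off, rather than standalone links; the token and choice values below are hypothetical placeholders, a sketch rather than a fix for this specific dashboard.

    <!-- sketch only: a link-list input; the panel shows/hides content based on $table_tok$ -->
    <fieldset submitButton="false">
      <input type="link" token="table_tok">
        <label>Choose a table</label>
        <choice value="table_a">Table A</choice>
        <choice value="table_b">Table B</choice>
        <choice value="table_c">Table C</choice>
        <default>table_a</default>
      </input>
    </fieldset>
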
I'm trying to get Windows DNS/DSC server logs into Splunk. I'm done with the Audit logs, but the Operational logs are creating errors in Splunk, which I think may be because of the %4 in the channel name (please refer to the attached image). Is there any other way to get these logs into Splunk? I tried it with * (wildcard) as well.

message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC_Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC-Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC*'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC%4Operational'
message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-DSC Operational'

Tried almost all combinations.
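
For reference, the %4 in the on-disk .evtx file name typically encodes a forward slash, so the channel name that Event Viewer (and the event log API) uses is usually of the form Microsoft-Windows-DSC/Operational. A hedged inputs.conf sketch along those lines; the channel name should be verified under Event Viewer > Applications and Services Logs, and the index is a placeholder:

    # inputs.conf on the Universal Forwarder (sketch only; verify the channel name in Event Viewer)
    [WinEventLog://Microsoft-Windows-DSC/Operational]
    disabled = 0
    index = wineventlog
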
I have used this regex: \^([^=]+)=([^^]*)

Sample event:
Apr 23 21:43:22 3.111.9.101 CEF:0|Seqrite|EPS|5.2.1.0|Data Loss Prevention Event|^|channelType=Applications/Online Services^domainName=AVOTRIXLABS^endpointName=ALEI5-ANURAGR^groupName=Default^channelDetail=Microsoft OneDrive Client^documentName=^filePath=C:\Users\anurag.rathore.AVOTRIXLABS\OneDrive - Scanlytics Technology\Documents\git\splunk_prod\deployment-apps\Fleet_Management_Dashboard\appserver\static\fontawesome-free-6.1.1-web\svgs\solid\flask-vial.svg^macID1=9C-5A-44-0A-26-5B^status=Success^subject=^actionId=Skipped^printerName=^recipientList=^serverDateTime=Wed Apr 23 16:13:57 UTC 2025^matchedItem=Visa^sender=^contentType=Confidential Data^dataId=Client Application^incidentOn=Wed Apr 23 16:07:38 UTC 2025^ipAddressFromClient=***.***.*.16^macID2=00-FF-58-34-31-0E^macID3=B0-FC-36-CA-1C-73^userName=anurag.rathore

It is able to extract all fields correctly except a few. Here documentName should be empty, but it is showing this at search time.
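
As a point of comparison, the same caret-delimited key=value extraction is often done with a search-time transform; the stanza and sourcetype names below are made up, and the only change to the pattern is excluding ^ from the key capture so a key can never span delimiters:

    # transforms.conf (hypothetical stanza name)
    [seqrite_caret_kv]
    REGEX = \^([^=^]+)=([^^]*)
    FORMAT = $1::$2
    MV_ADD = true

    # props.conf (hypothetical sourcetype)
    [seqrite:eps]
    REPORT-caret_kv = seqrite_caret_kv
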
Is there any metric or other way to monitor AppDynamics license usage counts, rather than looking at the actual License page in the Controller console?
Hi everyone, as part of a project, I'm integrating Jira with Splunk to visualize ticket data (status, priority, SLA, etc.). I'm currently concerned about the data volume and how Splunk handles high ticket traffic. Does anyone have experience with sending a large number of Jira tickets (thousands or more) to Splunk on a regular basis?
- Are there limits or performance issues to be aware of?
- Should I split the integration by project, or is it manageable in a single pipeline?
- Are there any best practices for optimizing ingestion and storage in Splunk in such cases?
Any insights or shared experiences would be highly appreciated. Thanks in advance!
I’m working on a project that requires integrating Jira with Splunk to collect ticket data (such as status, priority, and SLA information) and visualize it in real-time dashboards. What are the best practices or tools for doing this efficiently, especially across multiple Jira projects?
I installed Splunk in a distributed management environment. My indexer server got rebooted, and now I can't query my data, not even index=_internal, whereas previously it was fine.
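
As a starting point for troubleshooting, a hedged check run from the search head to see whether it still lists the rebooted indexer as a search peer (field names can vary by version):

    | rest /services/search/distributed/peers splunk_server=local
    | table title status version
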
Hi all, I installed IT Essentials Work and am getting error 500. It is also displayed as ITSI under Apps and not as IT Essentials Work. Also, there are 2 licenses available for some reason; I didn't add any. I don't have a license for ITSI, nor do I want to use ITSI. The app was installed via CLI, following Splunk's guidelines. Any suggestions as to what the reason could be? There is a dev license in use as well, so I'm not sure whether that has anything to do with the issue. Thanks!
Good day everyone,

I am currently trying to instrument Oracle Forms and Reports for some of the critical applications used within my enterprise. The way I'm configuring it is as follows (see the sketch after this post):
1. Launch the WebLogic Server.
2. Enter the managed server (1 for Forms and 1 for Reports).
3. Add -javaagent arguments to the "Arguments" section for each of the managed servers and restart the managed servers themselves.

What this gives me:
1. I am able to see Forms and Reports discovered as separate tiers within AppDynamics, showing Servlet calls.
2. I am able to see the RCU database, which is required for the WebLogic startup and contains metadata.

What I am missing and trying to achieve:
1. I am missing the connection to the actual transactional database that works with this application.
2. Once the above point is complete, to use data collectors for transaction snapshots to output the User ID and help correlate exactly which Forms are working slowly or having errors, so that this can be resolved by the application team.

If the community has any information on custom instrumentation that can be done to achieve the two points mentioned above, that would be extremely helpful. Insights from previous attempts to instrument Oracle Forms and Reports are also welcome and highly appreciated.

Also, please feel free to let me know if monitoring the Admin Server itself via a startup script might be the way forward to resolve the issues mentioned above, as well as any information or guidance on monitoring via the OPMN file that was mentioned in another forum thread I've been reading (All AppDynamics Discussions posts - link to the forum thread mentioning OPMN.xml file configuration), if anybody has the experience.

Thank you for taking the time to read and share your thoughts!
JVM Instrumentation Agent
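
For reference, the managed-server "Arguments" entries being described usually look something like the sketch below; the agent path and the application/tier/node names are placeholders, not values from this environment:

    # JVM arguments added per managed server (sketch only; path and names are hypothetical)
    -javaagent:/opt/appdynamics/javaagent/javaagent.jar
    -Dappdynamics.agent.applicationName=OracleFormsReports
    -Dappdynamics.agent.tierName=FormsTier
    -Dappdynamics.agent.nodeName=forms_ms1
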
I have two separate Splunk Cloud instances and want to forward a specific set of data from one instance to another. Please suggest an approach or any app/add-on available for this purpose.
Hi all, I'm exploring ways to get a specific visualization on a Splunk dashboard; I have attached a screenshot as a reference below. In it, hovering over the yellow or orange bar should show the SLA value, and clicking the arrow should show the categories, with a further drilldown to the services. Currently I have a setup in which the SLA value is represented by a gauge widget visualization. Please let me know if there is any method to get this kind of visualization. Many thanks in advance!
Hi, I have created a new token and index in Splunk for my MuleSoft project. These are the configurations I have done in MuleSoft to get the Splunk logs. Despite this, I am unable to see any logs in the dashboard when I search, e.g. index="indexname".

log4j2.xml file changes:

<Configuration status="INFO" name="cloudhub"
    packages="com.mulesoft.ch.logging.appender,com.splunk.logging,org.apache.logging.log4j">
  <Appenders>
    <RollingFile> <!-- rolling file details here --> </RollingFile>
    <SplunkHttp name="Splunk"
        url="localhost url"
        token="token"
        index="indexname"
        batch_size_count="10"
        disableCertificateValidation="true">
      <PatternLayout pattern="%-5p %d [%t] [processor: %X{processorPath}; event: %X{correlationId}] %c: %m%n" />
    </SplunkHttp>
    <Log4J2CloudhubLogAppender name="CloudHub"
        addressProvider="com.mulesoft.ch.logging.DefaultAggregatorAddressProvider"
        applicationContext="com.mulesoft.ch.logging.DefaultApplicationContext"
        appendRetryIntervalMs="${sys:logging.appendRetryInterval}"
        appendMaxAttempts="${sys:logging.appendMaxAttempts}"
        batchSendIntervalMs="${sys:logging.batchSendInterval}"
        batchMaxRecords="${sys:logging.batchMaxRecords}"
        memBufferMaxSize="${sys:logging.memBufferMaxSize}"
        journalMaxWriteBatchSize="${sys:logging.journalMaxBatchSize}"
        journalMaxFileSize="${sys:logging.journalMaxFileSize}"
        clientMaxPacketSize="${sys:logging.clientMaxPacketSize}"
        clientConnectTimeoutMs="${sys:logging.clientConnectTimeout}"
        clientSocketTimeoutMs="${sys:logging.clientSocketTimeout}"
        serverAddressPollIntervalMs="${sys:logging.serverAddressPollInterval}"
        serverHeartbeatSendIntervalMs="${sys:logging.serverHeartbeatSendIntervalMs}"
        statisticsPrintIntervalMs="${sys:logging.statisticsPrintIntervalMs}">
    </Log4J2CloudhubLogAppender>
  </Appenders>

  <Loggers>
    <AsyncLogger name="org.mule.service.http" level="WARN" />
    <AsyncLogger name="org.mule.extension.http" level="WARN" />
    <AsyncLogger name="org.mule.runtime.core.internal.processor.LoggerMessageProcessor" level="INFO" />
    <AsyncRoot level="INFO">
      <AppenderRef ref="file" />
      <AppenderRef ref="Splunk" />
      <AppenderRef ref="CloudHub" />
    </AsyncRoot>
    <AsyncLogger name="Splunk.Logger" level="INFO">
      <AppenderRef ref="splunk" />
    </AsyncLogger>
  </Loggers>
</Configuration>

pom.xml file changes:

<repository>
  <id>splunk-artifactory</id>
  <name>Splunk Releases</name>
  <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
</repository>

<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.7.3</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.10.0</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-api</artifactId>
  <version>2.10.0</version>
</dependency>

Please let me know if I am missing any configuration, since I believe I am pretty much following what's on the MuleSoft website and in other articles.
Hello team! We have a problem sending data from several Domain Controllers to our Splunk instance. We collect and send the logs using the Splunk Universal Forwarder. Usually there are no problems with sending, but sometimes we "lose" logs. The DCs carry a very heavy data load, and when the amount of logs reaches the cap, the oldest logs start being overwritten. As far as I can tell, the problem only affects the Windows Security journals. Problem: the Splunk Universal Forwarder doesn't have enough time to read and send the data from the DC before the logs get overwritten. Could this be related to the Splunk process execution priority or to load from other processes on the DC? How can we solve this problem? Have you had the same experience, or any advice for getting rid of this problem?
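
One setting that sometimes matters in this scenario is the Universal Forwarder's default throughput cap; a hedged limits.conf sketch for lifting it on the affected DCs (the value is an example, not a recommendation for this environment):

    # limits.conf on the Universal Forwarder (sketch only)
    [thruput]
    # the UF default is 256 KB/s; 0 removes the cap
    maxKBps = 0
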
Hello, I am new to Splunk and I am hoping someone will be able to help me out with a problem. I am creating a heatmap and the values aren't dynamic enough. I want the values of the heat graph to be calculated as the total amount (of whatever I am counting) / 7 (number of days) * 0.5. For example, if the total amount for a certain week is 700, the result of the above equation is 50. If the source in question is below 50 it would be red, above that yellow, and then whatever I set for green. How can I calculate that when I have multiple sources being used? If this doesn't make sense I can provide some data. Thank you in advance for your time and help.
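
A hedged SPL sketch of that calculation, assuming a weekly time range and a count by source; the index, the thing being counted, and the green cutoff are placeholders:

    index=my_index earliest=-7d@d latest=@d
    | stats count by source
    | eventstats sum(count) as total
    | eval threshold=round(total/7*0.5, 0)
    | eval color=case(count < threshold, "red",
                      count < threshold*2, "yellow",
                      true(), "green")
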
Hello, is there a way to set specific colors based on a specific field? I have a stacked column chart with four possible fields (nonconformance, part_not_installed, incomplete_task, and shortage in this case), and I use a token system to only show specific fields in my chart based on those selections. It seems that the color order is set by default, so when I remove the first option the color changes to the new top field name. Is there an easy way to lock these so each potential option consistently keeps the same color, i.e. locking field color to field value? Example images of the change vs. when the fields change are attached.
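
For reference, a sketch of pinning series colors to field names with the charting.fieldColors option in Simple XML; the hex values are arbitrary examples:

    <chart>
      <!-- search and other chart options here -->
      <option name="charting.fieldColors">
        {"nonconformance": 0xD93F3C, "part_not_installed": 0xF58F39,
         "incomplete_task": 0xF7BC38, "shortage": 0x65A637}
      </option>
    </chart>
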
We are a big customer. We hit a big issue in upgrading from 9.1.7 to 9.2.4, and it took a long time for the issue to be resolved. We have a large stack with many indexers. Our current operating system is RedHat 7; we are in the process of migrating to RedHat 8.

On upgrade from 9.1.7 to 9.2.4, one of the indexer clusters that ingests the most data suddenly had its aggregation and parsing queues filled at 100% during our peak logging hours. The indexers were not using much more CPU or memory; it's just that the queues were very full.

It turns out that Splunk has enabled profiling starting in 9.2, specifically CPU time profiling. These settings are controlled in limits.conf: https://docs.splunk.com/Documentation/Splunk/9.2.4/Admin/Limitsconf. There are 6 new profiling metrics, and these are all enabled by default. In addition, agg_cpu_profiling runs a lot of time-of-day routines. A lot.

There are several choices of clock source in RedHat: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/reference_guide/chap-Timestamping#sect-Hardware_clocks. It turns out that we had set our clock source to "hpet" some number of years ago. This clock source, while high precision, is much slower than "tsc". Once we switched to tsc, the problem with our aggregation and parsing queues at 100% during peak hours was fixed.

Even if you don't have the clock source issue, the change in profiling is something to be aware of in the upgrade to 9.2.
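
For anyone checking their own systems, the clock source can be inspected and switched at runtime as sketched below (standard RHEL sysfs paths; making the change persistent normally goes through the kernel command line, e.g. clocksource=tsc):

    # show the clock source currently in use and the ones available
    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource

    # switch to tsc at runtime
    echo tsc | sudo tee /sys/devices/system/clocksource/clocksource0/current_clocksource
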
Is there any way to tell whether data coming into Splunk's HEC was sent to the event or raw endpoint? You can't really tell from looking at the events themselves, so I was hoping there was a way to tell based on something like the token, sourcetype, source, or host. I have tried searching the _internal index and have not found anything helpful.
Hello Splunk team, I need a search query that can pull back data on successful and unsuccessful login attempts by users logging into a server using SAML. I also need to create a dashboard of the results. If any additional information is needed, please let me know. Do I need to extract a field for all the users using SAML? v/r cmazurdia
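
A hedged starting point, assuming the SAML/IdP logs are already onboarded and CIM-mapped to the Authentication data model (index, apps, and tags will differ per environment); without CIM mapping, the same idea works as a plain search over the IdP's index filtering on its success/failure field:

    | tstats count from datamodel=Authentication
        by _time span=1h, Authentication.user, Authentication.action, Authentication.app
    | rename Authentication.* as *
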
Hello, some of the forwarder installations are behaving strangely. They take an hour for the data to be indexed and displayed in Splunk Web, and additionally the timestamp is offset by 60 minutes. For most Splunk forwarders, the data is displayed in Splunk Web almost immediately, and the times there also match. Reinstalling the affected forwarders did not help. Do you have a solution?
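
If the 60-minute offset turns out to be a time-zone/DST mismatch for those hosts, the usual knob is TZ in props.conf on the parsing tier (indexers or heavy forwarders); the sourcetype and zone below are placeholders:

    # props.conf (sketch only; per-host TZ can also be set with a host stanza)
    [my:sourcetype]
    TZ = Europe/Berlin
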
Hi all, has anyone managed to map CrowdStrike Falcon FileVantage (FIM) logs to a data model? If so, could you share your field mappings? We were looking at the Change DM; would this be the best option? Thanks
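
For illustration, a props.conf sketch of the kind of mapping a Change data model alignment involves; the target fields (object_path, user, object_category, change_type, action) are CIM Change fields, but the sourcetype and the FileVantage source field names (file_path, user_name, action_taken) are hypothetical placeholders, not confirmed vendor fields:

    # props.conf (hypothetical sourcetype and source field names)
    [crowdstrike:filevantage]
    FIELDALIAS-fv_object_path = file_path AS object_path
    FIELDALIAS-fv_user = user_name AS user
    EVAL-object_category = "file"
    EVAL-change_type = "filesystem"
    EVAL-action = case(action_taken=="create", "created", action_taken=="modify", "modified", action_taken=="delete", "deleted")
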