All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Machine agent is not starting and crashes with an error:

[Agent-Monitor-Scheduler-2] 26 Aug 2021 00:02:14,981 INFO MonitorExecutorServiceModule - Initializing the ThreadPool with size 2
[Agent-Monitor-Scheduler-2] 26 Aug 2021 00:02:14,982 ERROR PeriodicTaskRunner - Error creating environment task
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to java.util.List
    at com.appdynamics.extensions.conf.modules.HttpClientModule.initHttpClient(HttpClientModule.java:29)
    at com.appdynamics.extensions.conf.MonitorConfiguration.loadConfigYml(MonitorConfiguration.java:181)
    at com.appdynamics.extensions.conf.MonitorConfiguration$1.onFileChange(MonitorConfiguration.java:106)
    at com.appdynamics.extensions.conf.modules.FileWatchListenerModule.createListener(FileWatchListenerModule.java:37)
    at com.appdynamics.extensions.conf.MonitorConfiguration.setConfigYml(MonitorConfiguration.java:112)
    at com.appdynamics.extensions.conf.MonitorConfiguration.setConfigYml(MonitorConfiguration.java:116)
    at com.appdynamics.extensions.ABaseMonitor.initialize(ABaseMonitor.java:94)
    at com.appdynamics.extensions.ABaseMonitor.execute(ABaseMonitor.java:127)
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.MonitorTaskRunner.runTask(MonitorTaskRunner.java:148)
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.PeriodicTaskRunner.runTask(PeriodicTaskRunner.java:86)
    at com.singularity.ee.agent.systemagent.components.monitormanager.managed.PeriodicTaskRunner.run(PeriodicTaskRunner.java:47)
    at com.singularity.ee.util.javaspecific.scheduler.AgentScheduledExecutorServiceImpl$SafeRunnable.run(AgentScheduledExecutorServiceImpl.java:122)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask$Sync.innerRunAndReset(ADFutureTask.java:335)
    at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask.runAndReset(ADFutureTask.java:152)
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.access$101(ADScheduledThreadPoolExecutor.java:119)
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.runPeriodic(ADScheduledThreadPoolExecutor.java:206)
    at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.run(ADScheduledThreadPoolExecutor.java:236)
    at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.runTask(ADThreadPoolExecutor.java:694)
    at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.run(ADThreadPoolExecutor.java:726)
    at java.lang.Thread.run(Thread.java:745)
Hi All, can anyone advise me on the issue below? I have Windows Application logs disabled already, but there is one event ID that I still need to allow through.
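For reference, a minimal sketch of the kind of inputs.conf stanza being asked about, assuming the standard Windows event log input; the event code 4625 is only a placeholder:

[WinEventLog://Application]
disabled = 0
# collect only this single event ID; everything else from the Application log is filtered out
whitelist = 4625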
I have set up an event collector and have endpoints configured in a cloud trial instance. I am unable to send requests to the endpoint unless SSL certificate verification is disabled. I need SSL to be enabled so that I can make external requests. Is there any way to enable SSL in a cloud free trial account? There seems to be no option to do so. Any help is appreciated. Thanks in advance.
I want to check allowed and denied scan attacks per hour. For the search I wrote below, which is more appropriate, values or list?
Also, which one is more suitable, stats or streamstats?

index="firewall" (action="allow" OR action="deny") AND (attack="*scan") | bin _time span=1d | stats count by _time, src_ip, dest_ip, app | stats values(dest_ip) AS dest_ip, values(count) AS count by _time, src_ip, app | table _time, src_ip, app, dest_ip, count
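For comparison, a sketch of the list() variant being weighed; it is the same search with only the aggregation function changed:

index="firewall" (action="allow" OR action="deny") AND (attack="*scan") | bin _time span=1d | stats count by _time, src_ip, dest_ip, app | stats list(dest_ip) AS dest_ip, list(count) AS count by _time, src_ip, app | table _time, src_ip, app, dest_ip, count

values() returns unique, sorted entries while list() keeps every occurrence in order, so the choice depends on whether duplicates matter.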
Hi, I have a lookup file that contains a list of hosts (one column named hosts); this list may be subject to change. I want to build a search that compares this lookup file against the hosts in a specific index and returns a table showing "ok" where there is a match and "missing" where there is none. All searches I have attempted so far return only one side or the other. Is the only option here to rename the field in the host file, or does anyone have suggestions on how to complete this?

host (from lookup file)   host (from index)   match
host1                     host1               ok
host2                                         missing
host3                     host3               ok
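A minimal sketch of the kind of comparison being described, assuming the lookup column is named hosts, that host values match exactly, and using placeholder lookup and index names (hostlist.csv, myindex):

| inputlookup hostlist.csv
| rename hosts AS host
| join type=left host
    [ search index=myindex | stats count BY host ]
| eval match=if(isnull(count), "missing", "ok")
| table host, match

An append plus stats pattern would avoid subsearch limits on very large host lists, but the idea is the same.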
I have a multisite (specifically, two site) indexer cluster running 7.3, and a search head cluster with search affinity disabled per the recommendation here. I obviously need to upgrade all of those tiers to 8.1 very soon, given the end-of-support dates for 7.3 and 8.0. Ideally I would like to perform that upgrade without an outage. The best option for this appears to be a site-by-site upgrade, especially as I happen not to have any metrics indexes. Those instructions state that I must first upgrade my cluster manager node to 8.0, then upgrade the indexers in one site to 8.0, and then upgrade the search heads in that same site to 8.0 (then repeat for the other site, then repeat the whole thing to get to 8.1). How does this apply when the search heads are all set to site0? In contrast, the instructions for upgrading each tier separately say to upgrade the manager, then the search heads, then the indexers. In what order should I do this upgrade? Do the search heads also have to go from version 7.3 to 8.0 to 8.1 like the cluster manager and indexers do?
I need to learn the process of configuring an app to use a certain index, please. Thank you.
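To frame the question, a minimal sketch of the usual way an app's data input is pointed at an index via inputs.conf, assuming a file monitor input; the path, index, and sourcetype names are placeholders:

[monitor:///var/log/myapp/app.log]
index = my_app_index
sourcetype = myapp:log
disabled = 0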
I have this log:

{ [-]
   duration: 3005
   finishTime: 2021-08-25T15:47:26.838196
   logger: splunk
   startTime: 2021-08-25T15:47:23.832269
   stepTransitionDuration: [ [+]
   ]
   traceSteps: [ [-]
     { [+] }   { [+] }   { [+] }   { [+] }   { [+] }
     { [+] }   { [+] }   { [+] }   { [+] }   { [+] }
     { [+] }   { [+] }   { [+] }   { [+] }   { [+] }
     { [-]
       logType: info
       message: Response
       res: { [-]
         httpStatus: 400
       }
       timestamp: 2021-08-25T15:47:26.838195
       title: Response
     }
   ]
   xrayTraceId:
}

How can I access only the last item of traceSteps (the expanded one above, containing httpStatus: 400)? I tried to access it with mvindex but it returns blank results.
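A minimal sketch of one way to pull out just the last traceSteps entry, assuming the raw event is valid JSON with the field names shown above; the index filter is a placeholder:

index=myindex logger=splunk
| spath path=traceSteps{} output=steps
| eval last_step=mvindex(steps, -1)
| spath input=last_step path=res.httpStatus output=httpStatus
| table last_step, httpStatus

mvindex returning blank results usually means the multivalue field passed to it does not exist under that name, so the explicit spath extraction above avoids relying on the automatically extracted traceSteps{}.* fields.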
For example, when using the NLog file target: https://github.com/NLog/NLog/wiki/File-target

What is the best-performing way to create log files for the Forwarder? One record per file (timestamp + guid.json), which would create a lot of files? Or perhaps writing every second (multiple records per log file), but then what about file locks? I don't want the Splunk Forwarder to fight with NLog over who has a lock on the file.

What is the optimal, best-performing way to create log files while avoiding file locks?
Hello,

I have a simple dashboard that has 2 panels:
1) Types of dashboards (single value component showing the count of each type)
2) Drilldown for the 1st panel to show details

I want to change the 1st panel so that the font size of each dashboard type's name varies according to its count: the colour would indicate the category of dashboard, and the count (the number of times it has been accessed) would determine the size of the text.

I checked Dashboard Studio for this purpose but couldn't find a suitable solution. I mocked up a sample of the layout I want (see the screenshot below).

Thanks in advance!
Hey Splunk gang, I have a dashboard that I am creating, and it will ingest a file every 5 minutes. I need to create a search that will accumulate the value of an extracted field. For example: the extracted field is ACA, it comes in the first time at 10, then the second time (5 minutes later) at 15, and the dashboard displays 25, ideally in a single value panel. Here is the search that produces the original value, but it does not accumulate a total:

| rename "Amt Credits Acc" as "ACA" | fieldformat ACA = ("$".ACA) | table "ACA"
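A minimal sketch of a running-total approach over the panel's time range, assuming ACA is numeric once extracted; the base search terms are placeholders:

index=myindex sourcetype=mysourcetype
| rename "Amt Credits Acc" AS ACA
| stats sum(ACA) AS total_ACA
| fieldformat total_ACA = "$".total_ACA

With a single value visualization, stats sum collapses everything in the selected time range into one accumulated figure.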
Hi All, we have a query as below:

(index=abc OR index=def) category=* OR NOT blocked=0 AND NOT blocked=2 | rex field=index "(?<Local_Market>[^cita]\w.*?)_" | stats count(Local_Market) by Local_Market | rename count(Local_Market) as Total_Blocked | addcoltotals col=t labelfield=Local_Market label="Total Blocked" | append [search (index=abc OR index=def) blocked=0 | rex field=index "(?<Local_Market>\w.*?)_" | stats count by Local_Market | rename count as Detected_Count | addcoltotals col=t labelfield=Local_Market label="Total Detected"]

It currently returns:

Local_Market      Total Blocked   Total Detected
Germany           20
ghana             80
India             91
Total Blocked     191
Germany                           10
Ghana                             20
India                             10
Total Detected                    40

I want the data laid out like this:

Local_Market      Germany   ghana   India   Total
Total Blocked     20        80      91      191
Total Detected    10        20      10      40
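A sketch of one way to get that shape, reusing the field logic from the search above and letting transpose turn the markets into columns; the blocked conditions and the rex are carried over as-is and may need adjusting:

(index=abc OR index=def) category=*
| rex field=index "(?<Local_Market>\w.*?)_"
| stats sum(eval(if(blocked!=0 AND blocked!=2, 1, 0))) AS "Total Blocked", sum(eval(if(blocked==0, 1, 0))) AS "Total Detected" BY Local_Market
| addcoltotals labelfield=Local_Market label="Total"
| transpose 0 header_field=Local_Market column_name=Local_Market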
I am trying to download the free trial version of Splunk. I have Windows 10 64-bit. When I go to download, it takes me to the licensing agreement, but after that there is no place to download the software. Any advice?
Hi,

I have finally got my search working. It compares data between an index and a lookup (CSV) file that contains asset names, and it outputs the assets found in both the index and the CSV, based on some evals:

index=myindex ASSETS [ | inputlookup linuxhostnames.csv | eval hostname="*".hostname."*" | rename hostname as DNS ] | dedup DNS | eval Agent=if(like(TAG, "%NonProd%"), "Yes - NonProd", "No Agent") | eval Location=if(like(TAG, "%DataCenter%"), "Data Center", "Not in DC") | where Agent="No Agent" | table DNS, IP, OS, Location, TAG

Even if I remove the eval statements, the number of assets returned is less than the total count in the CSV, so it is listing only the assets that are found in both the CSV and the index. How can I generate a table that shows the assets that are not in the index but are in the CSV?

Thank you
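A minimal sketch of one way to list the CSV-only assets, assuming the lookup's hostname values match the DNS field exactly (the wildcard matching used in the original subsearch would need a different approach):

| inputlookup linuxhostnames.csv
| rename hostname AS DNS
| eval in_csv="yes"
| append [ search index=myindex ASSETS | stats count BY DNS | eval in_index="yes" ]
| stats values(in_csv) AS in_csv, values(in_index) AS in_index BY DNS
| where in_csv="yes" AND isnull(in_index)
| table DNS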
I want to search for "9.com"; however, the results return 89.com, five9.com, and guru99.com. How can I restrict the search to exact matches? Please help.
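A sketch of one way to filter out the partial matches, assuming the domain only appears in _raw and is not an extracted field; the index name is a placeholder:

index=myindex "9.com"
| regex _raw="(?<![\w.])9\.com"

If the domain is already extracted into a field, an exact match such as | where domain="9.com" avoids the regex entirely.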
I'm using the v20.5 Java agent. Is there a way to trigger and download a thread dump via the API? I don't see one. In the UI, we can do something like this:
1) Go to the node
2) Go to the agent tab
3) Request agent log files
4) Collect x thread dumps y millis apart
5) Download the resulting files

I would like to automate that. Thanks.
I am planning an upgrade to version 8.2.0 from 8.0.6. According to the documentation, I should be validating that my add-ons and apps are compatible with Python 3. I ran both the Python Upgrade Readiness Check and the Splunk Platform Upgrade Readiness App; they both indicated that even the newest version of the Splunk Add-on for Microsoft Windows, 8.1.2, failed the Python 3 check. Has anyone encountered this, and if you upgraded anyway, have you run into any issues? According to the Splunk Platform Upgrade Readiness App, the add-on failed at:

Check 7: Python scripts
Status: Warning
Required action: Update these Python scripts to be dual-compatible with Python 2 and 3.
File path: ...\bin\log.py

Is this something to be concerned about?
Hi, I have a problem: when I review an event, there is a mismatch between the timestamp extracted from the source file and "_time" in Splunk. The line in the source file looks like: T_LOGFILE_VORDEL_ANSWER_SLA10_1;0;08/25/2021 09:03:08, but the event shows a different _time in Splunk.
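A minimal sketch of props.conf timestamp settings that might apply if _time should be parsed from the third, semicolon-delimited field; the sourcetype name is a placeholder:

[vordel:answer:sla]
TIME_PREFIX = ^[^;]+;[^;]+;
TIME_FORMAT = %m/%d/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

If the mismatch is a fixed number of hours, a TZ setting on the same stanza is the more likely fix.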
Hi folks, I am seeking some assistance with the formatting of Splunk data forwarded to a 3rd-party collector. We have managed to get everything forwarding fine by configuring C:\Program Files\Splunk\etc\system\local\outputs.conf:

[syslog]
defaultGroup = syslogGroup
maxEventSize = 65535

[syslog:syslogGroup]
server = IPAddress:514
type = tcp

The problem is that, for Windows logs only, every field of a log arrives as a separate event, which multiplies traffic drastically. I read briefly about line breaking but I'm not sure how to configure it, and we only have a live environment, so I wouldn't want to make any changes that could potentially break our existing Splunk instance; it is used heavily by all our IT departments.

Any advice would be appreciated.

Cheers!
I have the following sources: "inserted" and "deleted".

In "inserted" I have these fields:
Id, Timestamp
1, 2021-08-18T19:39:31.3003273
2, 2021-08-18T02:25:05.786293
3, 2021-08-18T19:39:31.301158
etc.

In "deleted" I have the same fields:
Id, Timestamp
1, 2021-08-18T19:39:31.3003234
1, 2021-08-18T19:28:00.8425431
1, 2021-08-18T19:27:07.2603396
2, 2021-08-18T18:57:52.3556542
2, 2021-08-18T15:06:19.3365628
3, 2021-08-18T15:06:02.5264226
3, 2021-08-18T12:06:29.5371453
3, 2021-08-18T11:55:40.7562728
3, 2021-08-18T03:22:06.3672773

I need to filter the events in "inserted" that are newer than those in "deleted": where the Ids are the same in both sources and the Timestamp in "inserted" is greater than the Timestamp in "deleted". I've managed to set up a search for one Id, manually setting the latest timestamp that I found in "deleted", as per below:

index=something source=inserted Id=1
| eval data_inserted = strptime('Timestamp', "%Y-%m-%dT%H:%M:%S.%Q")
| eval data_deleted = "2021-08-18T19:39:31.3003234"
| eval data_deleted = strptime('data_deleted', "%Y-%m-%dT%H:%M:%S.%Q")
| where data_inserted > data_deleted

My goal, and where I need help, is how to do this automatically for all the Ids and Timestamps I have in source="deleted". Your help is very much appreciated. Thank you!
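A minimal sketch of one way to automate this across all Ids, assuming both sources live in the same index and using the timestamp format from the post:

index=something (source=inserted OR source=deleted)
| eval ts = strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%Q")
| eventstats max(eval(if(source="deleted", ts, null()))) AS last_deleted BY Id
| where source="inserted" AND ts > last_deleted

eventstats attaches the newest deleted timestamp per Id to every event, so the final where keeps only the inserted events that are newer than it; Ids with no deleted events at all would be dropped here, and coalesce(last_deleted, 0) would keep them instead.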