All Posts



Does anyone know how the Cluster Manager populates the dmc_forwarder_assets input lookup CSV table? I have an issue where my UF forwarder reports show hosts whose os field contains the value Windows repeated hundreds or even thousands of times. I'd like to check how this data table is being populated by the CM.
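(Not an answer, but possibly useful while you investigate: you can inspect the lookup directly to see which hosts carry the duplicated os values. The field names here, hostname and os, are my assumptions about the standard Monitoring Console lookup; adjust them to whatever a bare | inputlookup shows you. If I remember correctly, the table is rebuilt by a scheduled search in the Monitoring Console app, so finding that search's SPL would show exactly how the os field gets aggregated.)

```spl
| inputlookup dmc_forwarder_assets
| eval os_count=mvcount(os)
| where os_count > 1
| table hostname os_count
| sort - os_count
```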
Amazing! Thank you. Yes I misunderstood macros.
Thank you @kiran_panchavat for your response. However, this may not be useful, as we cannot install Splunk inside the container. We are not monitoring the container itself or the Docker logs. The logs that need to be monitored are from applications installed inside the container. As mentioned, we have around 5-6 containers.
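One pattern that may fit this situation, sketched under the assumption that the application log directories can be bind-mounted out of the containers: expose each container's log path on the host filesystem and monitor it with the host's UF, so nothing is installed inside the container. The paths, index, and sourcetype below are hypothetical.

```
# docker run, per container (hypothetical host path):
#   docker run -v /var/log/containers/app1:/opt/app/logs ...

# inputs.conf on the host's universal forwarder:
[monitor:///var/log/containers/app1]
index = app_logs
sourcetype = app1:log
disabled = 0
```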
I can see the statuses below for the scheduled saved searches:
status="deferred"
status="continued"
What is the difference between the two, and which one will later be marked as skipped (status="skipped")? Is there a "failed" status as well?
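The per-status counts are easy to pull from the scheduler logs in _internal; a sketch (widen the time range and filter on savedsearch_name as needed):

```spl
index=_internal sourcetype=scheduler
| stats count by savedsearch_name status
```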
Oh, this is not a question; it's a solution, I see. Thanks for sharing.
8/2024: I get this message with Linux Splunk v9.3.0. It started appearing after I relocated $SPLUNK_DB and freed up the space under $SPLUNK_HOME/var/lib/splunk/. Update: the message stopped after splunkd re-created all the 2-byte index .dat files under the old location $SPLUNK_HOME/var/lib/splunk/. Maybe I should have used a symbolic link to relocate the index DB instead of defining a new DB location in splunk-launch.conf.
Hi, Are there plans to upgrade the html to be compatible with Splunk 9.1?   https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Updating_deprecated_HTML_dashboards
@ITWhisperer This is what I imagine it should look like, but I'm not sure if there is a way to add a condition like "reset_on_change= if (status="UP", 1, 0)" to streamstats in this command, or a workaround:
| bucket span=1m _time
| eval status_change=if(status="DOWN",1,0)
| streamstats sum(status_change) as down_count reset_on_change= if (status="UP", 1, 0)
| eval is_alert=if(down_count >=5 AND status="DOWN",1,0)
| where is_alert=1
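As far as I know, reset_on_change only accepts a boolean (it resets when the by-clause fields change), so an eval expression won't work there. One possible workaround is reset_before, which does take an eval expression; the sketch below resets the running sum whenever the status comes back up (it assumes the field values are literally "UP" and "DOWN", which doesn't quite match the data shown, so adjust the comparisons):

```spl
| bucket span=1m _time
| eval status_change=if(status="DOWN",1,0)
| streamstats reset_before="(status==\"UP\")" sum(status_change) as down_count
| eval is_alert=if(down_count>=5 AND status="DOWN",1,0)
| where is_alert=1
```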
@ITWhisperer I want an alert if, checking every 1 minute, there has been a period of at least 5 minutes of Status being "Down"; if it's interrupted by a Status = Up, the count resets and no alert fires, regardless of the event count.
Your data does not match your description: the Status field appears to be either "up" or "Down", not "true". Because of this, it is not clear whether you want an alert when there has been a period of at least 5 minutes of Status being "Down", or of Status being "up", anywhere within the time period of the search. Please clarify your requirement.
I get this too. In splunkd.log, we see the shutdown process, but then it just... doesn't shut down... until it times out. It looks like the shutdown process completes, but the HttpPubSubConnection keeps going.
Shutdown [182482 Shutdown] - shutting down level="ShutdownLevel_Tailing"
08-15-2024 22:27:57.171 +0000 INFO Shutdown [182482 Shutdown] - shutting down name="TailingProcessor"
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182482 Shutdown] - Will reconfigure input.
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182482 Shutdown] - Calling addFromAnywhere in TailWatcher=0x7f4e53dfda10.
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182712 MainTailingThread] - Shutting down with TailingShutdownActor=0x7f4e77429300 and TailWatcher=0x7f4e53dfda10.
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182712 MainTailingThread] - Pausing TailReader module...
08-15-2024 22:27:57.171 +0000 INFO TailReader [182712 MainTailingThread] - State transitioning from 0 to 1 (pseudoPause).
08-15-2024 22:27:57.171 +0000 INFO TailReader [182712 MainTailingThread] - State transitioning from 0 to 1 (pseudoPause).
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182712 MainTailingThread] - Removing TailWatcher from eventloop...
08-15-2024 22:27:57.176 +0000 INFO TailingProcessor [182712 MainTailingThread] - ...removed.
08-15-2024 22:27:57.176 +0000 INFO TailingProcessor [182712 MainTailingThread] - Eventloop terminated successfully.
08-15-2024 22:27:57.177 +0000 INFO TailingProcessor [182712 MainTailingThread] - Signaling shutdown complete.
08-15-2024 22:27:57.177 +0000 INFO TailReader [182712 MainTailingThread] - State transitioning from 1 to 2 (signalShutdown).
08-15-2024 22:27:57.177 +0000 INFO TailReader [182712 MainTailingThread] - Shutting down batch-reader
08-15-2024 22:27:57.177 +0000 INFO TailReader [182712 MainTailingThread] - State transitioning from 1 to 2 (signalShutdown).
08-15-2024 22:27:57.177 +0000 INFO Shutdown [182482 Shutdown] - shutting down level="ShutdownLevel_IdataDO_Collector"
08-15-2024 22:27:57.177 +0000 INFO Shutdown [182482 Shutdown] - shutting down name="IdataDO_Collector"
08-15-2024 22:27:57.178 +0000 INFO Shutdown [182482 Shutdown] - shutting down level="ShutdownLevel_PeerManager"
08-15-2024 22:27:57.178 +0000 INFO Shutdown [182482 Shutdown] - shutting down name="BundleStatusManager"
08-15-2024 22:27:57.178 +0000 INFO Shutdown [182482 Shutdown] - shutting down name="DistributedPeerManager"
08-15-2024 22:27:57.692 +0000 INFO TcpInputProc [182624 TcpPQReaderThread] - TcpInput queue shut down cleanly.
08-15-2024 22:27:57.692 +0000 INFO TcpInputProc [182624 TcpPQReaderThread] - Reader thread stopped.
08-15-2024 22:27:57.692 +0000 INFO TcpInputProc [182623 TcpListener] - TCP connection cleanup complete
08-15-2024 22:28:52.001 +0000 INFO HttpPubSubConnection .....
... ... INFO IndexProcessor [199494 MainThread] - handleSignal : Disabling streaming searches.
Splunk continues to write log lines from HttpPubSubConnection ("Running phone....") after the Shutdown; nothing else shows up in the logs. I re-ran "./splunk stop" in another session, and it finally logged one more line and actually stopped.
I have a search query; if the Status field is true for more than 5 min, I need to trigger an alert no matter the event count result. If it stays that way for the whole timeframe, then fire. Maybe even have it search every 1 minute.
For example, this should not fire an alert, because it recovered within the 5 min:
1:00 Status = Down (event result count X5)
1:03 Status = up
1:07 Status = Down (event count X3)
1:10 Status = up
1:13 Status = up
1:16 Status = up
For example, this should fire an alert:
1:00 Status = Down (event result count X1)
1:03 Status = Down (event result count X1)
1:07 Status = Down (event result count X1)
1:10 Status = up
1:13 Status = up
1:16 Status = up
Not a search head limit, but an ingestion limit. If you look at the raw events, you'll probably see one JSON document broken into multiple "events". The solution is in props.conf (or use Splunk Web to set MAX_EVENTS). Good thing you noticed the line count; it took me about two years. See my post in Getting Data In.
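For reference, the relevant props.conf settings on the ingestion tier look roughly like this (the sourcetype name and the numbers are illustrative; MAX_EVENTS caps the number of lines merged into one event, default 256, and TRUNCATE caps an event's byte length, default 10000):

```
[my_json_sourcetype]
MAX_EVENTS = 2000
TRUNCATE = 500000
```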
@yuanliu, I am not running any complex query; with the basic search, when I hover my mouse over the field of interest, "LogController_LogMerticsAsync_request.loggerData{}.adType", I only get the top 3 values instead of the 5 values in the table you provided. The JSON event I provided is truncated; the actual event is around 959 lines of JSON. So is there any limit setting on the search head that prevents it from analyzing the whole event?
Well, the safest way to get those values would probably be to either use summary indexing or schedule separate searches for each count and then append their results with loadjob. But if your fields are easily obtainable with PREFIX, you could use tstats to run quick separate tstats-based searches and append them together. You could also, as I said earlier, simply do a count by all four of those parameters and then do eventstats, but that might give you too many results to aggregate (if every user can hit each netscaler, each site, and so on, that can get into relatively high numbers; it might be worth a try, though).
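A rough sketch of the tstats-plus-append idea, assuming the raw events contain indexed key=value pairs (the index name and key names are hypothetical; PREFIX only works on key=value tokens present in the raw data):

```spl
| tstats count where index=citrix by PREFIX(user=)
| append
    [| tstats count where index=citrix by PREFIX(site=)]
| append
    [| tstats count where index=citrix by PREFIX(netscaler=)]
```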
a few hundred
Yes, but you're (luckily) not counting by sessionID. You're counting by other things: storefront, netscaler, site, and user. I suppose the user field will have the most values. The question is how many: hundreds? Thousands? Millions?
Each user has a unique sessionid that connects to one Storefront, one netscaler, and one site. Let me dig into eventstats.
Searches using subsearches (maybe with the exception of multisearch) are extremely tricky to troubleshoot due to the limits on subsearches. That seems a very odd way to calculate four separate statistics using some syntactic "glue". What is the cardinality of each of your sources/targets (Netscaler, site, UserName, Storefront)? Maybe it would be more natural to just do a simple count over all of them and then eventstats a sum over some combinations.
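The count-then-eventstats shape I mean looks roughly like this (the index and field names are assumptions):

```spl
index=citrix
| stats count by Netscaler site UserName Storefront
| eventstats sum(count) as user_total by UserName
| eventstats sum(count) as site_total by site
```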
Hi @praveen.K R, since the Community was not able to jump in and help, you can contact Cisco AppDynamics Support for more help. AppDynamics is migrating its Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.