All Posts

I have a search query; if the Status field stays Down for more than 5 minutes, I need to trigger an alert no matter the event count result. If it recovers within the timeframe, it should not fire. Maybe even have it search every 1 minute.

For example, this should not fire an alert because it recovered within the 5 minutes:
1:00 Status = Down (event result count x5)
1:03 Status = up
1:07 Status = Down (event count x3)
1:10 Status = up
1:13 Status = up
1:16 Status = up

For example, this should fire an alert:
1:00 Status = Down (event result count x1)
1:03 Status = Down (event result count x1)
1:07 Status = Down (event result count x1)
1:10 Status = up
1:13 Status = up
1:16 Status = up
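A minimal sketch of one way this could work, assuming hypothetical index and sourcetype names (my_index, service_status): schedule the search every minute, work out when the current streak of identical Status values began, and fire only if the latest streak is Down and started at least 5 minutes ago.

index=my_index sourcetype=service_status earliest=-60m
| sort 0 _time
``` streak_start = when the current run of identical Status values began ```
| streamstats reset_on_change=true earliest(_time) as streak_start by Status
| tail 1
| where Status="Down" AND now() - streak_start >= 300

Against the examples above, the 1:03 recovery restarts the streak before it reaches 5 minutes, so no alert fires; in the second timeline, the Down streak starting at 1:00 is still running past 1:05, so the condition holds.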
Not a search head limit, but an ingestion limit.  If you look at the raw events, you'll probably see one JSON document broken into multiple "events".  The solution is in props.conf (or use Splunk Web to set MAX_EVENTS).  Good thing you noticed the line count.  It took me like 2 years.  See my post in Getting Data In.
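A minimal props.conf sketch, assuming a hypothetical sourcetype name (my_json_sourcetype): MAX_EVENTS caps how many lines can be merged into one event (default 256) and TRUNCATE caps the event size in bytes (default 10000), so a 959-line JSON document needs both raised.

[my_json_sourcetype]
# Allow up to 2000 lines to be merged into a single event (default is 256)
MAX_EVENTS = 2000
# Raise the per-event byte limit so the large JSON document is not cut off (default is 10000)
TRUNCATE = 100000

This belongs wherever parsing happens (indexers or heavy forwarders), not on the search head.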
@yuanliu , I am not running any complex query. With the basic search, when I hover my mouse over the field of interest, "LogController_LogMetricsAsync_request.loggerData{}.adType", I am only getting the top 3 values instead of the 5 values shown in your table.  The JSON event I provided is truncated; the actual event is around 959 lines of JSON. So is there any limit setting on the search head that prevents it from analyzing the whole event?
Well, the safest way to get those values would probably be either to use summary indexing or to schedule separate searches for each count and then append their results with loadjob. But if your fields are easily obtainable with PREFIX, you could use tstats to run quick separate tstats-based searches and append them together. You could also - as I said earlier - simply try to count by all four of those parameters and then do eventstats, but that might give you too many result rows to aggregate (if every user can hit each netscaler, each site, and so on, that can get into relatively high numbers; but it might be worth a try).
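A minimal sketch of the count-plus-eventstats idea, assuming a hypothetical index name (citrix_logs) and the field names from the searches below:

index=citrix_logs
| stats count by Storefront Netscaler site UserName
``` one row per observed combination; roll the counts up per dimension ```
| eventstats sum(count) as storefront_count by Storefront
| eventstats sum(count) as netscaler_count by Netscaler
| eventstats sum(count) as site_count by site
| eventstats sum(count) as user_count by UserName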
a few hundred
Yes, but you're (luckily) not counting by sessionID. You're counting by other stuff - storefront, netscaler, site and user. I suppose the user field will have the most values. The question is how many - hundreds? Thousands? Millions?
Each user has a unique sessionid that connects to one Storefront, one netscaler, and one site.  Let's dig into eventstats.
Searches using subsearches (maybe with the exception of multisearch) are extremely tricky to troubleshoot due to limits on subsearches. That seems to be a very weird way to calculate four separate statistics using some syntactic "glue". What is the cardinality of each of your sources/targets (Netscaler, site, UserName, Storefront)? Maybe it would be more natural to just do a simple count over all of them and then simply eventstats a sum over some combinations?
Hi @praveen.K R, since the Community was not able to jump in and help, you can contact Cisco AppDynamics Support for more help. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
I don't recall exactly what fixed this, but I would hazard a guess that it has to do with the sslPassword in server.conf.
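For reference, a minimal server.conf sketch of where that setting lives (the passphrase value here is a placeholder, not something from this thread):

[sslConfig]
# Passphrase for the server's private key; Splunk replaces a plaintext
# value with an encrypted ($7$...) form on the next restart
sslPassword = <your_key_passphrase>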
I see this error on 9.1.5. However, I'm curious what effect this error has on Splunk - does it make the Splunk search page unreachable? I see the following while loading the web page; is this because of the above error? "Hmm... can't reach this page. The connection was reset."
I am not seeing results for the count on each of the fields for the 2 different searches below.  The first one shows (let's say) the 3 storefront names with no counts.  If I just run a | stats count by Storefront it returns the correct counts.  The fields are created in the statistics with no counts or names of the netscalers, site, or user.  The second search does not return any statistical results.  I am hoping to see the count of connections to the Storefront and its correlating NetScaler in a Sankey diagram.

| stats count by Storefront
| rename Storefront as source
| appendpipe [ stats count by Netscaler | rename Netscaler as source, count as count_Netscaler ]
| appendpipe [ stats count by site | rename site as source, count as count_site ]
| appendpipe [ stats count by UserName | rename UserName as source, count as count_UserName ]
| fields source, count_Netscaler, count_site, count_UserName
| search source=*

| stats count by Storefront
| rename Storefront as source
| appendpipe [ stats count by Netscaler | rename Netscaler as source, Storefront as target ]
| appendpipe [ stats count by site | rename site as source, Netscaler as target ]
| appendpipe [ stats count by UserName | rename UserName as source, site as target ]
| search source=* AND target=*
| stats sum(count) as count by source, target
| fields source, target, count
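One likely reason the appendpipe variants come back empty, sketched under the assumption of a hypothetical base search (index=citrix_logs): appendpipe runs its subpipeline over the results already produced by the first stats, and at that point the Netscaler, site, and UserName fields no longer exist. A source/target table for a Sankey diagram could instead be built from separate passes over the raw events:

index=citrix_logs
| stats count by UserName site
| rename UserName as source, site as target
| append [ search index=citrix_logs | stats count by site Netscaler | rename site as source, Netscaler as target ]
| append [ search index=citrix_logs | stats count by Netscaler Storefront | rename Netscaler as source, Storefront as target ]
| stats sum(count) as count by source target

Note this still relies on subsearches, with the limits mentioned elsewhere in this thread.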
Long story short - it's not possible. Hot/warm cannot be time-limited. As simple as that. However many fancy calculations you do based on average bucket sizes and so on, do a few restarts across your clusters or get some bad-quality data and you end up with many small buckets rolling out of warm faster than you can say "bucket lifecycle". Anyway, it's relatively strange to see the same storage size allocated for hot/warm as for cold. Usually, since cold is slower and cheaper, there is way more cold space than hot/warm. Of course, keeping frozen data stored for an adequate period of time is up to you, so you can easily script it to wait X days before removing the exported buckets.
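To make that concrete, a minimal indexes.conf sketch with a hypothetical index name, paths, and sizes (not settings from this thread): total searchable retention (hot/warm plus cold) is time-based via frozenTimePeriodInSecs, while the hot/warm-to-cold transition is driven by size.

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Total searchable retention (hot/warm + cold): 60 months, approximated as 5 years
frozenTimePeriodInSecs = 157680000
# Hot/warm cannot be time-limited; a size cap controls rolling to cold
homePath.maxDataSizeMB = 500000
# Copy buckets here instead of deleting them when they freeze;
# removal after a further retention period has to be scripted externally
coldToFrozenDir = /archive/my_index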
You're right. Come to think of it, my Dev-licensed box also worked as a DS. That's why I said not to quote me on that. But seriously - the log suggests (you'd have to look in the code to verify) that the app is trying to list indexers. And this API endpoint might indeed not be available with a Dev license, since it's a single-instance-installation-only license.
For that, you need to dive very, very deep into the semantics of your logs.  Ask your developers how to reconstruct a complete transaction from log entries.  And yes, read up on transaction and learn about its options.  And practice on mock data.  Semantic problems have no shortcuts. If you don't want to go semantic, there is delta, and possibly streamstats, which can give you the elapsed time since the second-to-last event (which you put up as the subject line for this question).  However, my reverse engineering based on the sample logs you gave gives me very low confidence that counting lines yields any meaningful measure.
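A minimal sketch of the non-semantic route, assuming a hypothetical base search: with current=f and window=2, streamstats aggregates over the two events preceding each one, so earliest(_time) in that window is the timestamp of the second-to-last event.

index=my_index
| sort 0 _time
``` look back over the two preceding events; earliest of those is the second-to-last ```
| streamstats current=f window=2 earliest(_time) as second_last_time
| eval secs_since_second_last = _time - second_last_time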
Splunk will not automatically give a count or a percentage after a search.  You need to show the command you use to get those three values.  This is why @ITWhisperer says you cannot find an answer without context. This emulation shows what Splunk gets after the raw search.

| makeresults
| eval _raw = "{ \"@t\": \"2024-08-14T13:34:42.1718458Z\", \"@mt\": \"{className}{methodName}{transactionId}{logLevel}@{LogController_LogMetricsAsync_request}\", \"className\": \"D:\\\\CW\\\\uploader\\\\Service\\\\LogController.cs_152\", \"methodName\": \"LogMetricsAsync\", \"transactionId\": \"d8e8e141-e9fc749abb0f\", \"logLevel\": \"Information\", \"LogController_LogMetricsAsync_request\": { \"action\": \"Device\", \"event\": \"Info\", \"loggerData\": [ { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"PlaybackAd\", \"adType\": \"Midpoints\", \"content\": \"Episode\", \"adId\": \"676697\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"PlaybackAd\", \"adType\": \"Third Quartiles\", \"content\": \"Episode\", \"adId\": \"676697\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"PlaybackAd\", \"adType\": \"Completes\", \"adId\": \"676697\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"adType\": \"Midpoints\", \"content\": \"Episode\", \"adId\": \"CODE791\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"adType\": \"Third Quartiles\", \"content\": \"Episode\", \"adId\": \"CODE791\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"adType\": \"Completes\", \"content\": \"Episode\", \"adId\": \"CODE791\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"adType\": \"Start\", \"content\": \"Episode\", \"adId\": \"635897\" } ] } }"
| spath ``` data emulation above ```
| table LogController_LogMetricsAsync_request.loggerData{}.adType

The table I get is:

LogController_LogMetricsAsync_request.loggerData{}.adType
Midpoints
Third Quartiles
Completes
Midpoints
Third Quartiles
Completes
Start

There are seven values in this array.
Play with this emulation, plug in your subsequent search commands, and find out what's wrong in them.  Or post your search to get volunteers to help. Side note: Python, for one, will not accept \\ in JSON.  Technically this is invalid in a JSON document, but somehow Splunk takes it.
PickleRick, thank you for this information. I understand how buckets, indexes, and indexers work, and how the data retention process works with Splunk. This is a virtual Splunk cloud environment (Splunk is installed on cloud VMs), and we are NOT using SmartStore. I'm just not sure how to configure indexes.conf / the individual index stanza to reflect the data retention requirements of:
Hot/Warm for 30 months
Cold for 30 months
Frozen for 30 months
Here is the JSON event. When I hover over the field of interest, "LogController_LogMetricsAsync_request.loggerData{}.adType", I am only getting 3 values, as shown in the table, but I see 5 entries of "adType" in the raw event.

Values            Count  %
Completes         1      100%
Midpoints         1      100%
Third Quartiles   1      100%

Here is the sample JSON. It is a huge JSON event, but some data is truncated.

{
  "@t": "2024-08-14T13:34:42.1718458Z",
  "@mt": "{className}{methodName}{transactionId}{logLevel}@{LogController_LogMetricsAsync_request}",
  "className": "D:\\CW\\uploader\\Service\\LogController.cs_152",
  "methodName": "LogMetricsAsync",
  "transactionId": "d8e8e141-e9fc749abb0f",
  "logLevel": "Information",
  "LogController_LogMetricsAsync_request": {
    "action": "Device",
    "event": "Info",
    "loggerData": [
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "PlaybackAd", "adType": "Midpoints", "content": "Episode", "adId": "676697" },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "PlaybackAd", "adType": "Third Quartiles", "content": "Episode", "adId": "676697" },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "PlaybackAd", "adType": "Completes", "adId": "676697" },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "adType": "Midpoints", "content": "Episode", "adId": "CODE791" },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "adType": "Third Quartiles", "content": "Episode", "adId": "CODE791" },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "adType": "Completes", "content": "Episode", "adId": "CODE791" },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } },
      { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "adType": "Start", "content": "Episode", "adId": "635897" }
    ]
  }
}
Hi Rick - thanks for the reply. I think forwarder management is supported, as I have a deployment server running on the same instance - I have created server classes and deployed apps via this, so that aspect appears to be working.  My plan was to run Stream forwarder on the all-in-one instance and deploy the Splunk_TA_Stream app to my UFs. Should this be possible?
It's not about Stream as such. As far as I remember (but I haven't used the Dev license for some time, so don't quote me on that), the Dev license alleviates some limitations of the Free license (most importantly, it lets you have multiple users and schedule searches) but keeps some of them - single-instance installation only and, as far as I remember, no forwarder management.