All Posts

The Thread Name would be useful for correlating with logs. Without any changes to the apps, most apps print out thread names in their logging pattern. RequestGUID would not show up unless we plugged in something and made a very specific call, and that would mean modifying all apps/all calls to retrieve it once per transaction. I was hoping it could be retrieved, or that there would be a trick to make one of the Data Collectors call currentThread.getName to log it with the requestGUID. Thanks.
I am looking to have the middle row of this table be on the left instead. I think something in the query is off and is causing this odd behavior. This is the current search query; the rex provides the data in the MEMGB column.
index=main host=* sourcetype=syslog process=elcsend "\"config" $city$
| rex "([^!]*!){16}(?P<MEMGB>[^!]*)!"
| chart count by MEMGB
| addcoltotals label=Total labelfield=MEMGB
| sort count desc
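It is not entirely clear which row is ending up out of place, but if the issue is that the Total row added by addcoltotals gets moved by the final sort, one possible variant (just a sketch, keeping the rest of the search unchanged) is to sort before adding the totals row so that Total stays as the last row:
index=main host=* sourcetype=syslog process=elcsend "\"config" $city$
| rex "([^!]*!){16}(?P<MEMGB>[^!]*)!"
| chart count by MEMGB
| sort 0 - count
| addcoltotals label=Total labelfield=MEMGB
The "0" on sort removes the default 10,000-result limit, which is mainly a safety net here.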
How do I return field values from a specific max(eventnumber)? This was helpful but did not solve my issue: Solved: How to get stats max count of a field by another f... - Splunk Community
We are ingesting logs from test devices. Each log has an event number, which I can search on to find the most recent event. When the devices disconnect from our cloud instance, they cache events, which are then transmitted at a lower priority (newest to oldest) than real-time events. For example: event #100 is received while connected to the cloud, events #101-103 occur while disconnected and are cached, event #104 is received after re-connecting to the cloud (latest status), then event #103 is transmitted, then #102. So using latest/earliest or first/last does not return the most recent status.
The logs consist of an event number and boolean (true/false) fields. Searching for max(event number) and values(boolean field value) returns both true and false for any time picker period that has multiple events, for example:
| stats max(triggeredEventNumber) values(isCheckIn) values(isAntiSurveillanceViolation) BY userName
userName | max(triggeredEventNumber) | values(isCheckIn) | latest(isAntiSurveillanceViolation)
NS2_GS22_MW | 92841 | false true | FALSE
In the example the actual value of isCheckIn was true. Here is a complete example event:
{
  "version": 1,
  "logType": "deviceStateEvent",
  "deviceSerialNumber": "4234220083",
  "userName": "NS2_GS22_MW",
  "cloudTimestampUTC": "2025-01-06T18:17:00Z",
  "deviceTimestampUTC": "2025-01-06T18:16:46Z",
  "triggeredEventNumber": 92841,
  "batteryPercent": 87,
  "isCheckIn": true,
  "isAntiSurveillanceViolation": false,
  "isLowBatteryViolation": false,
  "isCellularViolation": false,
  "isDseDelayed": false,
  "isPhonePresent": true,
  "isCameraExposed": false,
  "isShutterOpen": false,
  "isMicExposed": false,
  "isCharging": false,
  "isPowerOff": false,
  "isHibernation": false,
  "isPhoneInfoStale": false,
  "bleMacAddress": "5c:2e:c6:bc:e4:cf",
  "cellIpv4Address": "0.0.0.0",
  "cellIpv6Address": "::"
}
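One way to pull the field values from the single event with the highest event number per user, rather than aggregating values across all events, is an eventstats filter. This is a sketch only, assuming triggeredEventNumber and the boolean fields are extracted as in the example event; the index name is a placeholder:
index=your_index logType=deviceStateEvent
| eventstats max(triggeredEventNumber) as maxEvent by userName
| where triggeredEventNumber=maxEvent
| table userName triggeredEventNumber isCheckIn isAntiSurveillanceViolation
Because each event is compared against the per-user maximum event number, the late-arriving cached events (#103, #102) are filtered out even though they were indexed after #104.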
Hi @Stephen.Knott, I know we are coming back from the holidays. I wanted to bump this conversation to see if you could check out Mario's reply. If the reply helped, please click the Accept as Solution button; if not and you need more help, reply to keep the conversation going.
Hi @Uma.Boppana, I wanted to give this thread a nudge to see if you saw Mario's reply and want to keep the conversation going or if you found a solution you could share here.
Hi @Prasad.V, I know we are coming back from the holidays here. I wanted to see if you saw the reply from Mario and wanted to reply to keep the conversation going.
Hi @Roberto.Barnes, Thanks for sharing your question on the community. It's been a few days with no reply. Did you happen to find a solution you can share here? If you are still looking for help, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM)
@AL3Z
stats: Calculates aggregate statistics over the entire dataset or over subsets of the dataset.
eventstats: Calculates summary statistics on the search results and adds these statistics as new fields to each event.
I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply, thanks.
stats is a transforming command whereas eventstats is not. In other words, the output of stats contains only the fields explicitly given in the command, but the output of eventstats contains all of the existing event fields *plus* those calculated by the command. Use eventstats when you need to do a calculation across all events while preserving the events themselves, for example, to list the events where a field value exceeds the average:
... | eventstats avg(x) as AvgX | where x > AvgX ...
Eventstats adds new fields to existing events (hence the name), whereas stats replaces existing events with stats events. So in scenarios where you want to keep the events, you might use eventstats; where you want to replace the events with the results of the stats functions, you would use stats.
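To make the contrast concrete, here is a minimal side-by-side sketch (the index and field names web, status, and bytes are hypothetical):
index=web | stats avg(bytes) as avg_bytes by status
index=web | eventstats avg(bytes) as avg_bytes by status
The first search returns one summary row per status value and discards the raw events; the second returns every original event with an extra avg_bytes field attached, which later commands can use, e.g. | where bytes > avg_bytes.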
I misunderstood your problem - your conditions need to use the values of the labels, i.e. US and EU
<form version="1.1" theme="light">
  <label>Hosts</label>
  <init>
    <set token="host">eks-prod-saas-ue1-*</set>
  </init>
  <fieldset submitButton="false">
    <input type="dropdown" token="connection">
      <label>Select Region</label>
      <default>dev-platform-postgres</default>
      <choice value="dev-platform-postgres">US</choice>
      <choice value="dev-platform-postgres-eu">EU</choice>
      <change>
        <condition label="US">
          <set token="host">eks-prod-saas-ue1-*</set>
        </condition>
        <condition label="EU">
          <set token="host">prd-shared-services-eu-eks*</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        <p>$host$ $connection$</p>
      </html>
    </panel>
  </row>
</form>
Hello all! I need to get data from Splunk Observability (a list of Synthetics tests) into Splunk Cloud. I have tried this Observability API:
curl -X GET "https://api.{REALM}.signalfx.com/v2/synthetics/tests" \
  -H "Content-Type: application/json" \
  -H "X-SF-TOKEN: <value>"
Then I attempted to execute a cURL query in Splunk Cloud like this:
| curl method=get uri=https://api.xxx.signalfx.com/v2/synthetics/tests?Content-Type=application/json&X-SF-TOKEN=xxxxxxxxxxx
| table curl*
but I am getting the following error: HTTP ERROR 401 Unauthorized. Thanks for any help!
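The 401 is consistent with the token being passed as a query-string parameter: the command-line example sends X-SF-TOKEN as an HTTP header, and the Observability API expects it there rather than appended to the URI. If the curl search command being used here (it depends on which add-on provides it) supports a header argument, the call would look roughly like this sketch; the headerfield argument name is hypothetical, so check the add-on's documentation for the actual option:
| curl method=get uri="https://api.xxx.signalfx.com/v2/synthetics/tests" headerfield="X-SF-TOKEN: xxxxxxxxxxx"
| table curl*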
Hi, Could you please let me know in what scenarios we would use eventstats vs stats?
The behavior is very strange. To stop getting the error messages, I had to reassign the saved searches to an existing admin account, and the messages disappeared. It's a workaround. But I get lots of similar messages when I navigate to the Scheduler Activity: Instance dashboard in the monitoring console:
01-06-2025 17:07:59.749 +0100 ERROR UserManagerPro [24247 TcpChannelThread] - user="nobody" had no roles
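If it helps to see which scheduled searches are still owned by nobody, a REST search along these lines can list them (a sketch; run it on the search head that raises the errors and adjust the filters as needed):
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.owner="nobody" is_scheduled=1 disabled=0
| table title eai:acl.app eai:acl.owner eai:acl.sharing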
The way hot/warm/cold buckets and bucket replication work, it is in your best interest to make the site 1 and site 2 indexing tiers identical. Someone with advanced on-prem admin experience would be able to size this, but storage becomes your biggest concern with unaligned resources. If you have some sort of business or budget constraint then I get why you would have unaligned sites; however, personally I would very strongly suggest that both sites have identical compute and storage capacity at the indexing tier. Your individual indexer CPU count will determine how many concurrent searches can be run. The compute power of your new machines appears acceptable from the minimal information available. Keep an eye on skipped searches to confirm - the internal logs will indicate a skip reason (see the sketch below). Ideally SH and IDX should keep similar if not exactly the same CPU cores.
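A quick check for skipped searches and their reasons, run against the internal scheduler logs (a minimal sketch; widen the time range as appropriate):
index=_internal sourcetype=scheduler status=skipped
| stats count by reason, app, savedsearch_name
| sort - count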
We are planning to upgrade our Splunk hardware. We currently have the setup below (a multisite indexer cluster with independent search head clusters) and we are facing problems with a low CPU count and high disk latency (we currently have HDDs). We primarily index data through HEC.
Type | Site | Number of nodes | CPU p/v (per node) | Memory GB (per node)
SH cluster | 1 | 4 | 16/32 | 128
Indexer cluster | 1 | 11 | 4/8 | 64
Indexer manager/License master | 1 | 1 | 16/32 | 128
SH cluster | 2 | 4 | 16/32 | 128
Indexer cluster | 2 | 11 | 4/8 | 64
Indexer manager/License master | 2 | 1 | 16/32 | 128
Daily indexing/license usage is 400-450 GB, which may grow further in the near future. Search concurrency example for one instance from the 4-node SH cluster:
We are trying to come up with the best hardware configuration that can support such a load. Looking at the Splunk recommended settings, we have come up with the config below. Can someone shed more light on whether this is an optimal config, and also advise on the number of SH machines and indexer machines needed with such new hardware?
Site 1: 3-node SH cluster, 7-node indexer cluster
Site 2: As we are using site 2 for searching and indexing only during unavailability of site 1, maybe it can be smaller?
Role | CPU (p/v) | Memory
Indexer | 24/48 | 64G
Non indexer | 32/64 | 64G
I also tried:
<fieldset submitButton="false">
  <input type="dropdown" token="connection">
    <label>Select Region</label>
    <default>dev-platform-postgres</default>
    <choice value="dev-platform-postgres">US</choice>
    <choice value="dev-platform-postgres-eu">EU</choice>
    <change>
      <condition match="$connection$==dev-platform-postgres">
        <set token="host">eks-prod-saas-ue1-*</set>
      </condition>
      <condition match="$connection$==dev-platform-postgres-eu">
        <set token="host">prd-shared-services-eu-eks*</set>
      </condition>
    </change>
  </input>
</fieldset>
but again $host$ is not updated on fieldset change
Are these values always aligned, or are the values sometimes unaligned and you still want to know if they are in both fields?
Which Windows OS?
Hi @ITWhisperer, thanks for taking the time to reply. When using init, it only initializes the first time but doesn't update accordingly when the fieldset is changed.