All Posts

It appears you are hitting a known issue that affects some recent versions: 9.4.1, 9.4.0, 9.3.3, 9.2.5 and 9.1.8. You may want to check this article: https://github.com/splunk/docker-splunk/issues/698
Can you add an information field to the entity you don't want in the service and then add an exclusion for that information field in the entity filter?
Have you connected your AWS account to Splunk Observability Cloud? If so, you should be able to see an ECS navigator in the Infrastructure view: https://docs.splunk.com/observability/en/gdi/get-data-in/connect/aws/get-awstoc.html   Another option is to open Metric Finder, search for ecs.task.* metrics, build a chart from them, and save it to a dashboard. Since you already deployed the OTel collector as a task, it should be sending in ecs.task.* metrics.
Expert advice needed. I was able to ingest CloudWatch logs for ECS and Lambda with Data Manager. Now I need to add tags like env=, service= and custom= to enrich the logs; the same was done for metrics with OTel collector flags and the UF. For logs ingested with DM, can I add an AWS resource tag to the CloudWatch log group I'm ingesting and expect this tag (key-value pair) to be added to the logs? Another possible solution could be to use the Splunk log driver directly from ECS instead of CloudWatch; according to the documentation, the env flag of the Splunk log driver should let me add some container env vars to the log message. Same question for the Lambdas, but there only the AWS resource tags on the CloudWatch log group could be attached to the ingested message. Any suggestions?
We have a service for location 102. We preface entities that correlate with that service with 102 in their entity name; for example, a location 102 entity can be named "102AP_M1" for an AP, where the number before the device type is the location ("102" in this instance). We use the aliases entity_name and name to map entities to this service. Due to our bad naming conventions we have another entity named "100AP_M102" that is showing up as an entity mapped to service 102. I put in an alias of "name NOT 100AP_M102" but this didn't remove the entity from this service. I tried similar aliases but no luck. We use a base search to identify these APs and don't want to remove this base search because there are other dependencies. Any ideas on how to get this AP off this service?
Even less. By default, unless we're talking about join, which has different limits, it's just 10k results. Back to your original question... I'm not sure what you want to do, to be honest. What do you want to join with what? And what results are you getting from each of those searches? BTW, you don't have to use tstats to search a datamodel (but you might want to if you want to aggregate quickly and your DM is accelerated; otherwise it might be slower than a normal search).
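To illustrate the difference, here is a minimal sketch of both approaches; the data model, dataset and field names (YourDataModel, YourDataset, fieldY, CommonName) are placeholders, not anything from your environment.
Using the datamodel command (a normal, non-accelerated search over the data model):
| datamodel YourDataModel YourDataset search
| search YourDataset.fieldY=xy
Using tstats (fast when the data model is accelerated):
| tstats values(YourDataset.a) as a values(YourDataset.b) as b from datamodel=YourDataModel where YourDataset.fieldY=xy by YourDataset.CommonName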
Don't use indexed extractions! Unless you have a very good reason for it (if you don't know what reason that would be you probably don't have one). Just define a proper search-time extracted field.
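For example, a minimal search-time extraction in props.conf could look like this (the sourcetype, field name and pattern are only placeholders to adapt):
# props.conf on the search head (or configure it in Settings > Fields > Field extractions)
[my_sourcetype]
EXTRACT-user = user=(?<user>\S+)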
As @isoutamo already pointed out, "migrate to Cloud" is a bit of an ambiguous term. As far as I remember, if you're using Splunk Cloud, you still need to set up the DS on your own (that's one of the uses for the 0-byte ingest license). If you're simply migrating from an on-prem installation to a cloud-based (GCP, AWS or any other provider, really) Splunk Enterprise installation, you can put your DS there. A DS is not that sensitive to latency, so it shouldn't matter that much. There are other things to consider, however: if your DS is exposed to the internet instead of connected to your network via VPN, you will probably want to use deployment client authentication. You might also want to raise the phone home interval (which you should normally do anyway, even with an on-prem setup) to lower the load on the DS and save on outgoing traffic (which many cloud providers bill you for). The amount of traffic will depend on the number of clients, the phone home frequency, the size of your apps and how often the apps change. You should be able to do some aggregation on splunkd_access.log events from your DS (there is a metrics series for the DS, but it contains only the size of deployed apps, so phonehomes don't count). One caveat though: it will only contain the size of the payload, as with a normal httpd access log. It won't account for all the lower layers: request and response headers, TCP overhead and so on.
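As a starting point, a rough sketch of such an aggregation over splunkd_access.log; the uri and bytes field names and the phonehome URI pattern are assumptions here, so check them against what your DS actually logs:
index=_internal host=<your_deployment_server> sourcetype=splunkd_access uri="/services/broker/phonehome/*"
| timechart span=1h count as phonehome_requests sum(bytes) as payload_bytes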
@Karthikeya Apply the below configurations in props.conf and transforms.conf for the index-time field extraction. I have uploaded the sample events to my lab environment and applied these configurations; the fqdn field was successfully extracted. Please refer to the screenshot below.
Sample events:
{"timestamp":"2025-04-10T12:34:56Z", "vs_name":"v-juniper-uat.opco.sony-443", "status":"active"}
{"timestamp":"2025-04-10T12:35:01Z", "vs_name":"qa-nginx-dev.opco.abc-8443", "status":"active"}
{"timestamp":"2025-04-10T12:35:06Z", "vs_name":"prod-apache.opco.xyz-9443", "status":"inactive"}
{"timestamp":"2025-04-10T12:35:10Z", "vs_name":"test-web1.opco.something-8080", "status":"active"}
{"timestamp":"2025-04-10T12:35:15Z", "vs_name":"edge-juniper-uat.opco.sony-443", "status":"active"}
NOTE: If you use heavy forwarders, the props.conf and transforms.conf changes should be applied to the heavy forwarders instead of the indexers.
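A sketch of what those two files could contain; the sourcetype name is a placeholder and the regex is a variant of the search-time rex suggested in another reply (with \w+ instead of \w), so verify it against your real events:
# props.conf (on the indexers, or on heavy forwarders if you use them)
[my_json_sourcetype]
TRANSFORMS-extract_fqdn = extract_fqdn
# transforms.conf
[extract_fqdn]
REGEX = vs_name":"\w+-(.+)-\d+"
FORMAT = fqdn::$1
WRITE_META = true
# fields.conf (on the search heads, so the indexed field is searched efficiently)
[fqdn]
INDEXED = true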
The same happens with 9.4.1 - perhaps it is a feature? But, tbh, it sounds like a bug. Raise a ticket and see what support say?
What do you mean by "migrate to Cloud"? Are you moving it from on-prem to some cloud platform like AWS, GCP or Azure? Or are you migrating your Splunk servers (indexers, SHs etc.) into SCP? If the first, then it's just like any other server, and everything depends on your network latency and throughput. If those are sufficient for your other servers, then there shouldn't be any issues.
@gcusello this is working. And how do I make this extraction at index time? I mean, the field should be extracted while indexing. Please guide me.
This is what I was thinking when I saw the answer.
With [search index... you are creating a subsearch, which has a limit of 100k events, I think. I can't do subsearches, as the index is very big; think millions.
Hi Yes, you can combine tstats results with index search results using append, then aggregate with stats on a common field. I would avoid join for performance reasons.
index=abc fieldX IN (Mary John Bob)
| stats values(somevalue) as SomeA values(fieldX) as X by CommonName
| append
    [| tstats values(a) as a values(b) as b from datamodel=YourDataModel where fieldY=xy by _time span=1s CommonName ]
| stats values(a) as a values(b) as b values(SomeA) as SomeA values(X) as X dc(index) as idx by CommonName
Run the raw index search grouped by CommonName. Append tstats to get the data model results grouped by CommonName. The final stats aggregates all fields by CommonName, effectively "joining" the datasets.   Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hello team, I know I can use stats instead of join. For our purposes we sometimes do that with 2 different indexes. Now we have one huge index from which we took some fields, and we now have a "data model" which I can query using tstats. The problem is when I need to join result data from tstats with results from another index. Is this possible? I have the following query (pseudo query): index=abc fieldX IN (Mary John Bob) OR | tstats values(a) values(b) where fieldY=xy by _time span=1s | stats values(somevalue) as SomeA, dc(index) as idx, values(fieldX) as X by CommonName
Hi @Karthikeya , please try this: | rex "vs_name\":\"\w-(?<fqdn>.+)-\d+" that you can test at https://regex101.com/r/TDLukW/1 Ciao. Giuseppe
Hi @M4rv1m  Are you running on-prem or Splunk Cloud? This app actually uses Python requests under the hood with verify=True set - this means it is expecting a valid certificate based on the CAs it has access to. I believe you can override the requests CA bundle using the environment variable "REQUESTS_CA_BUNDLE" - this means you could possibly set this in $SPLUNK_HOME/etc/splunk-launch.conf to the CA of your Splunk instance, eg: REQUESTS_CA_BUNDLE=/opt/splunk/etc/auth/cacert.pem  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi, we use Version 9.2.4. The behaviour is independent of the search complexity. It also doesn't change if I search internal logs or across several indexes. The index(es) will always be replaced by an *. The behaviour is also the same within the list view, so it's not only table-view related. BR
Hi You can dynamically set the severity of a notable event in a correlation search by using an eval statement to populate the severity field based on your specific field's value.   <your search> | eval severity=case( fieldX="value1", "high", fieldX="value2", "critical", true(), "medium" ) The severity field in the correlation search result determines the notable event's severity in ES. Use eval with case() to assign severity dynamically based on the value of fieldX. The last true(), "medium" acts as a default if no other condition matches. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing